
A popular way to visualise relationships between some finite number of sets is via Venn diagrams, or more generally Euler diagrams. In these diagrams, a set is depicted as a two-dimensional shape such as a disk or a rectangle, and the various Boolean relationships between these sets (e.g., that one set is contained in another, or that the intersection of two of the sets is equal to a third) are represented by the Boolean algebra of these shapes; Venn diagrams correspond to the case where the sets are in “general position” in the sense that all non-trivial Boolean combinations of the sets are non-empty. For instance, to depict the general situation of two sets ${A,B}$ together with their intersection ${A \cap B}$ and ${A \cup B}$ one might use a Venn diagram such as

(where we have given each region depicted a different color, and moved the edges of each region a little away from each other in order to make them all visible separately), but if one wanted to instead depict a situation in which the intersection ${A \cap B}$ was empty, one could use an Euler diagram such as

One can use the area of various regions in a Venn or Euler diagram as a heuristic proxy for the cardinality ${|A|}$ (or measure ${\mu(A)}$) of the set ${A}$ corresponding to such a region. For instance, the above Venn diagram can be used to intuitively justify the inclusion-exclusion formula

$\displaystyle |A \cup B| = |A| + |B| - |A \cap B|$

for finite sets ${A,B}$, while the above Euler diagram similarly justifies the special case

$\displaystyle |A \cup B| = |A| + |B|$

for finite disjoint sets ${A,B}$.
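Both formulas are easy to verify concretely; here is a minimal sketch in Python using the built-in set type (the particular sets below are arbitrary choices for illustration, not taken from the discussion above):

```python
# A quick concrete check of the inclusion-exclusion formula and its disjoint
# special case, using Python's built-in set type (the sets are arbitrary).
A = {1, 2, 3, 4}
B = {3, 4, 5}
assert len(A | B) == len(A) + len(B) - len(A & B)   # |A u B| = |A|+|B|-|A n B|

C = {6, 7}          # disjoint from A, so the intersection term vanishes
assert A & C == set()
assert len(A | C) == len(A) + len(C)
```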

While Venn and Euler diagrams are traditionally two-dimensional in nature, there is nothing preventing one from using one-dimensional diagrams such as

or even three-dimensional diagrams such as this one from Wikipedia:

Of course, in such cases one would use length or volume as a heuristic proxy for cardinality or measure, rather than area.

With the addition of arrows, Venn and Euler diagrams can also accommodate (to some extent) functions between sets. Here for instance is a depiction of a function ${f: A \rightarrow B}$, the image ${f(A)}$ of that function, and the image ${f(A')}$ of some subset ${A'}$ of ${A}$:

Here one can illustrate surjectivity of ${f: A \rightarrow B}$ by having ${f(A)}$ fill out all of ${B}$; one can similarly illustrate injectivity of ${f}$ by giving ${f(A)}$ exactly the same shape (or at least the same area) as ${A}$. So here for instance might be how one would illustrate an injective function ${f: A \rightarrow B}$:

Cartesian product operations can be incorporated into these diagrams by appropriate combinations of one-dimensional and two-dimensional diagrams. Here for instance is a diagram that illustrates the identity ${(A \cup B) \times C = (A \times C) \cup (B \times C)}$:
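The displayed identity can likewise be checked concretely, with `itertools.product` standing in for the Cartesian product (again, the particular sets are arbitrary choices):

```python
from itertools import product

# A concrete check of (A u B) x C = (A x C) u (B x C).
A = {1, 2}
B = {2, 3}
C = {'x', 'y'}

lhs = set(product(A | B, C))                   # (A u B) x C
rhs = set(product(A, C)) | set(product(B, C))  # (A x C) u (B x C)
assert lhs == rhs
```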

In this blog post I would like to propose a similar family of diagrams to illustrate relationships between vector spaces (over a fixed base field ${k}$, such as the reals) or abelian groups, rather than sets. The categories of (${k}$-)vector spaces and abelian groups are quite similar in many ways; the former consists of modules over a base field ${k}$, while the latter consists of modules over the integers ${{\bf Z}}$; also, both categories are basic examples of abelian categories. The notion of a dimension in a vector space is analogous in many ways to that of cardinality of a set; see this previous post for an instance of this analogy (in the context of Shannon entropy). (UPDATE: I have learned that an essentially identical notation has also been proposed in an unpublished manuscript of Ravi Vakil.)

I’m collecting in this blog post a number of simple group-theoretic lemmas, all of the following flavour: if ${H}$ is a subgroup of some product ${G_1 \times \dots \times G_k}$ of groups, then one of three things has to happen:

• (${H}$ too small) ${H}$ is contained in some proper subgroup ${G'_1 \times \dots \times G'_k}$ of ${G_1 \times \dots \times G_k}$, or the elements of ${H}$ are constrained to some sort of equation that the full group ${G_1 \times \dots \times G_k}$ does not satisfy.
• (${H}$ too large) ${H}$ contains some non-trivial normal subgroup ${N_1 \times \dots \times N_k}$ of ${G_1 \times \dots \times G_k}$, and as such actually arises by pullback from some subgroup of the quotient group ${G_1/N_1 \times \dots \times G_k/N_k}$.
• (Structure) There is some useful structural relationship between ${H}$ and the groups ${G_1,\dots,G_k}$.
These sorts of lemmas show up often in ergodic theory, when the equidistribution of some orbit is governed by an unspecified subgroup ${H}$ of a product group ${G_1 \times \dots \times G_k}$, and one needs further information about this subgroup in order to take the analysis further. In some cases only two of the above three options are relevant. When ${H}$ is too “small” or too “large”, one can replace the groups ${G_1,\dots,G_k}$ by something smaller (either a subgroup or a quotient), and in applications one can often then proceed by some induction on the “size” of the groups ${G_1,\dots,G_k}$ (for instance, if these groups are Lie groups, one can often perform an induction on dimension); so it is often the structured case which is the most interesting to deal with.

It is perhaps easiest to explain the flavour of these lemmas with some simple examples, starting with the ${k=1}$ case where we are just considering subgroups ${H}$ of a single group ${G}$.

Lemma 1 Let ${H}$ be a subgroup of a group ${G}$. Then exactly one of the following hold:
• (i) (${H}$ too small) There exists a non-trivial group homomorphism ${\eta: G \rightarrow K}$ into a group ${K = (K,\cdot)}$ such that ${\eta(h)=1}$ for all ${h \in H}$.
• (ii) (${H}$ normally generates ${G}$) ${G}$ is generated as a group by the conjugates ${gHg^{-1}}$ of ${H}$.

Proof: Let ${G'}$ be the group normally generated by ${H}$, that is to say the group generated by the conjugates ${gHg^{-1}}$ of ${H}$. This is a normal subgroup of ${G}$ containing ${H}$ (indeed it is the smallest such normal subgroup). If ${G'}$ is all of ${G}$, we are in option (ii); otherwise we can take ${K}$ to be the quotient group ${K := G/G'}$ and ${\eta}$ to be the quotient map, placing us in option (i). Finally, if (i) holds, then all of the conjugates ${gHg^{-1}}$ of ${H}$ lie in the kernel of ${\eta}$, which is a proper normal subgroup of ${G}$, and so (ii) cannot hold. $\Box$
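As a toy illustration of this dichotomy (not part of the argument above), one can verify computationally that in ${G = S_3}$ the subgroup ${H}$ generated by a single transposition normally generates the whole group, placing us in option (ii):

```python
from itertools import permutations

# Toy check of Lemma 1 for G = S3 (permutations of {0,1,2} as tuples) and
# H the subgroup generated by the transposition (01): the conjugates of H
# generate all of G, so we are in option (ii).
S3 = set(permutations(range(3)))
identity = (0, 1, 2)

def mul(p, q):
    # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    return tuple(sorted(range(3), key=lambda i: p[i]))

def generated(gens):
    # subgroup generated by gens; in a finite group, closure under
    # multiplication suffices
    G = {identity} | set(gens)
    while True:
        new = {mul(a, b) for a in G for b in G} - G
        if not new:
            return G
        G |= new

H = {identity, (1, 0, 2)}   # subgroup generated by the transposition (01)
conjugates = {mul(mul(g, h), inv(g)) for g in S3 for h in H}
assert generated(conjugates) == S3
```

(Note that option (i) is consistent with this: ${S_3}$ does have a non-trivial quotient ${{\bf Z}/2}$, but the transposition has non-trivial image in it, so no such ${\eta}$ annihilates ${H}$.)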

Here is a “dual” to the above lemma:

Lemma 2 Let ${H}$ be a subgroup of a group ${G}$. Then exactly one of the following hold:
• (i) (${H}$ too large) ${H}$ is the pullback ${H = \pi^{-1}(H')}$ of some subgroup ${H'}$ of ${G/N}$ for some non-trivial normal subgroup ${N}$ of ${G}$, where ${\pi: G \rightarrow G/N}$ is the quotient map.
• (ii) (${H}$ is core-free) ${H}$ does not contain any non-trivial conjugacy class ${\{ ghg^{-1}: g \in G \}}$.

Proof: Let ${N}$ be the normal core of ${H}$, that is to say the intersection of all the conjugates ${gHg^{-1}}$ of ${H}$. This is the largest normal subgroup of ${G}$ that is contained in ${H}$. If ${N}$ is non-trivial, we can quotient it out and end up with option (i). If instead ${N}$ is trivial, then there is no non-trivial element ${h}$ that lies in the core, hence no non-trivial conjugacy class lies in ${H}$ and we are in option (ii). Finally, if (i) holds, then every conjugacy class of an element of ${N}$ is contained in ${N}$ and hence in ${H}$, so (ii) cannot hold. $\Box$
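For the same toy example ${G = S_3}$ with ${H}$ generated by a transposition, one can check that the normal core is trivial, so we are in option (ii): ${H}$ is core-free and contains no non-trivial conjugacy class.

```python
from functools import reduce
from itertools import permutations

# Toy check of Lemma 2 for G = S3 and H generated by the transposition (01):
# the normal core of H is trivial, and no non-trivial conjugacy class of G
# lies inside H (option (ii)).
S3 = set(permutations(range(3)))
e = (0, 1, 2)

def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p): return tuple(sorted(range(3), key=lambda i: p[i]))
def conj(g, h): return mul(mul(g, h), inv(g))

H = {e, (1, 0, 2)}
# normal core = intersection of all conjugates g H g^{-1}
core = reduce(lambda X, g: X & {conj(g, h) for h in H}, S3, H)
assert core == {e}
for h in S3 - {e}:
    conjugacy_class = {conj(g, h) for g in S3}
    assert not conjugacy_class <= H
```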

For subgroups of nilpotent groups, we have a nice dichotomy that detects properness of a subgroup through abelian representations:

Lemma 3 Let ${H}$ be a subgroup of a nilpotent group ${G}$. Then exactly one of the following hold:
• (i) (${H}$ too small) There exists a non-trivial group homomorphism ${\eta: G \rightarrow K}$ into an abelian group ${K = (K,+)}$ such that ${\eta(h)=0}$ for all ${h \in H}$.
• (ii) ${H=G}$.

Informally: if ${h}$ is a variable ranging in a subgroup ${H}$ of a nilpotent group ${G}$, then either ${h}$ is unconstrained (in the sense that it really ranges in all of ${G}$), or it obeys some abelian constraint ${\eta(h)=0}$.

Proof: By definition of nilpotency, the lower central series

$\displaystyle G_2 := [G,G], G_3 := [G,G_2], \dots$

eventually becomes trivial.

Since ${G_2}$ is a normal subgroup of ${G}$, ${HG_2}$ is also a subgroup of ${G}$. Suppose first that ${HG_2}$ is a proper subgroup of ${G}$. Then (since ${HG_2}$ contains the commutator subgroup ${[G,G]}$, it is normal with abelian quotient) the quotient map ${\eta \colon G \rightarrow G/HG_2}$ is a non-trivial homomorphism to the abelian group ${G/HG_2}$ that annihilates ${H}$, and we are in option (i). Thus we may assume that ${HG_2 = G}$, and thus

$\displaystyle G_2 = [G,G] = [G, HG_2].$

Note that modulo the normal subgroup ${G_3}$, ${G_2}$ commutes with ${G}$, hence

$\displaystyle [G, HG_2] \subset [G,H] G_3 \subset H G_3$

and thus

$\displaystyle G = H G_2 \subset H H G_3 = H G_3.$

We conclude that ${HG_3 = G}$. One can continue this argument by induction to show that ${H G_i = G}$ for every ${i}$; taking ${i}$ large enough we end up in option (ii). Finally, it is clear that (i) and (ii) cannot both hold. $\Box$
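As a toy illustration (again not part of the argument), consider the Heisenberg group of order ${27}$, the smallest non-abelian group of nilpotency class two over ${{\bf Z}/3}$: a subgroup containing the two standard generators surjects onto the abelianization, so option (i) fails, and the lemma forces it to be the whole group, as a short computation confirms.

```python
# Toy check of Lemma 3 for the (nilpotent) Heisenberg group G of order 27:
# upper unitriangular 3x3 matrices over Z/3, encoded as triples (a, b, c)
# with group law (a,b,c)*(x,y,z) = (a+x, b+y, c+z+a*y) mod 3. The subgroup H
# generated by the two standard generators surjects onto the abelianization
# (Z/3)^2, so option (i) fails, and indeed H is all of G.
p = 3

def mul(g, h):
    (a, b, c), (x, y, z) = g, h
    return ((a + x) % p, (b + y) % p, (c + z + a * y) % p)

def generated(gens):
    # closure under multiplication suffices in a finite group
    G = {(0, 0, 0)} | set(gens)
    while True:
        new = {mul(g, h) for g in G for h in G} - G
        if not new:
            return G
        G |= new

H = generated([(1, 0, 0), (0, 1, 0)])
assert len(H) == 27   # H = G, as option (ii) predicts
```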

Remark 4 When the group ${G}$ is locally compact and ${H}$ is closed, one can take the homomorphism ${\eta}$ in Lemma 3 to be continuous, and by using Pontryagin duality one can also take the target group ${K}$ to be the unit circle ${{\bf R}/{\bf Z}}$. Thus ${\eta}$ is now a character of ${G}$. Similar considerations hold for some of the later lemmas in this post. Discrete versions of the above lemma, in which the group ${H}$ is replaced by some orbit of a polynomial map on a nilmanifold, were obtained by Leibman and are important in the equidistribution theory of nilmanifolds; see this paper of Ben Green and myself for further discussion.

Here is an analogue of Lemma 3 for special linear groups, due to Serre (IV-23):

Lemma 5 Let ${p \geq 5}$ be a prime, and let ${H}$ be a closed subgroup of ${SL_2({\bf Z}_p)}$, where ${{\bf Z}_p}$ is the ring of ${p}$-adic integers. Then exactly one of the following hold:
• (i) (${H}$ too small) There exists a proper subgroup ${H'}$ of ${SL_2({\mathbf F}_p)}$ such that ${h \hbox{ mod } p \in H'}$ for all ${h \in H}$.
• (ii) ${H=SL_2({\bf Z}_p)}$.

Proof: It is a standard fact that the reduction of ${SL_2({\bf Z}_p)}$ mod ${p}$ is ${SL_2({\mathbf F}_p)}$, hence (i) and (ii) cannot both hold.

Suppose that (i) fails. Then for every ${g \in SL_2({\bf Z}_p)}$ there exists ${h \in H}$ such that ${h = g \hbox{ mod } p}$, which we write as

$\displaystyle h = g + O(p).$

We now claim inductively that for any ${j \geq 0}$ and ${g \in SL_2({\bf Z}_p)}$, there exists ${h \in H}$ with ${h = g + O(p^{j+1})}$; taking limits as ${j \rightarrow \infty}$ using the closed nature of ${H}$ will then place us in option (ii).

The case ${j=0}$ is already handled, so now suppose ${j=1}$. If ${g \in SL_2({\bf Z}_p)}$, we see from the ${j=0}$ case that we can write ${g = hg'}$ where ${h \in H}$ and ${g' = 1+O(p)}$. Thus to establish the ${j=1}$ claim it suffices to do so under the additional hypothesis that ${g = 1+O(p)}$.

First suppose that ${g = 1 + pX + O(p^2)}$ for some ${X \in M_2({\bf Z}_p)}$ with ${X^2=0 \hbox{ mod } p}$. By the ${j=0}$ case, we can find ${h \in H}$ of the form ${h = 1 + X + pY + O(p^2)}$ for some ${Y \in M_2({\bf Z}_p)}$. Raising to the ${p^{th}}$ power and using ${X^2=0}$ and ${p \geq 5 > 3}$, we note that

$\displaystyle h^p = 1 + \binom{p}{1} X + \binom{p}{1} pY + \binom{p}{2} X pY + \binom{p}{2} pY X$

$\displaystyle + \binom{p}{3} X pY X + O(p^2)$

$\displaystyle = 1 + pX + O(p^2),$

giving the claim in this case.

Any ${2 \times 2}$ matrix of trace zero with coefficients in ${{\mathbf F}_p}$ is a linear combination of ${\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}}$, ${\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}}$, ${\begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}}$ and is thus a sum of matrices that square to zero. Hence, if ${g \in SL_2({\bf Z}_p)}$ is of the form ${g = 1 + O(p)}$, then ${g = 1 + pY + O(p^2)}$ for some matrix ${Y}$ of trace zero, and thus one can write ${g}$ (up to ${O(p^2)}$ errors) as the finite product of matrices of the form ${1 + pY + O(p^2)}$ with ${Y^2=0}$. By the previous arguments, such a matrix ${1+pY + O(p^2)}$ lies in ${H}$ up to ${O(p^2)}$ errors, and hence ${g}$ does also. This completes the proof of the ${j=1}$ case.

Now suppose ${j \geq 2}$ and the claim has already been proven for ${j-1}$. Arguing as before, it suffices to close the induction under the additional hypothesis that ${g = 1 + O(p^j)}$, thus we may write ${g = 1 + p^j X + O(p^{j+1})}$. By induction hypothesis, we may find ${h \in H}$ with ${h = 1 + p^{j-1} X + O(p^j)}$. But then ${h^p = 1 + p^j X + O(p^{j+1})}$, and we are done. $\Box$
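The key matrix computation in the ${j=1}$ step can be sanity-checked numerically: if ${X}$ is an integer matrix with ${X^2=0}$ and ${h = 1 + X + pY}$, then ${h^p = 1 + pX \hbox{ mod } p^2}$. Here is a short check for ${p=5}$ (the matrices ${X, Y}$ below are arbitrary choices satisfying the hypotheses):

```python
# Numerical sanity check of the key step in the proof of Lemma 5: if X is an
# integer 2x2 matrix with X^2 = 0 and h = 1 + X + pY, then h^p = 1 + pX
# mod p^2. Here p = 5 and Y is arbitrary.
p = 5
M = p * p   # we work modulo p^2

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % M
             for j in range(2)] for i in range(2)]

def matpow(A, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = matmul(R, A)
    return R

X = [[0, 1], [0, 0]]   # satisfies X^2 = 0
Y = [[2, 3], [1, 4]]   # arbitrary
h = [[(int(i == j) + X[i][j] + p * Y[i][j]) % M for j in range(2)]
     for i in range(2)]
expected = [[(int(i == j) + p * X[i][j]) % M for j in range(2)]
            for i in range(2)]
assert matpow(h, p) == expected   # h^p = 1 + pX mod p^2
```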

We note a generalisation of Lemma 3 that involves two groups ${G_1,G_2}$ rather than just one:

Lemma 6 Let ${H}$ be a subgroup of a product ${G_1 \times G_2}$ of two nilpotent groups ${G_1, G_2}$. Then exactly one of the following hold:
• (i) (${H}$ too small) There exist group homomorphisms ${\eta_1: G'_1 \rightarrow K}$, ${\eta_2: G_2 \rightarrow K}$ into an abelian group ${K = (K,+)}$, with ${\eta_2}$ non-trivial, such that ${\eta_1(h_1) + \eta_2(h_2)=0}$ for all ${(h_1,h_2) \in H}$, where ${G'_1 := \{ h_1: (h_1,h_2) \in H \hbox{ for some } h_2 \in G_2 \}}$ is the projection of ${H}$ to ${G_1}$.
• (ii) ${H = G'_1 \times G_2}$ for some subgroup ${G'_1}$ of ${G_1}$.

Proof: Consider the group ${\{ h_2 \in G_2: (1,h_2) \in H \}}$. This is a subgroup of ${G_2}$. If it is all of ${G_2}$, then ${H}$ must be a Cartesian product ${H = G'_1 \times G_2}$ and option (ii) holds. So suppose that this group is a proper subgroup of ${G_2}$. Applying Lemma 3, we obtain a non-trivial group homomorphism ${\eta_2: G_2 \rightarrow K}$ into an abelian group ${K = (K,+)}$ such that ${\eta_2(h_2)=0}$ whenever ${(1,h_2) \in H}$. For any ${h_1}$ in the projection ${G'_1}$ of ${H}$ to ${G_1}$, there is thus a unique quantity ${\eta_1(h_1) \in K}$ such that ${\eta_1(h_1) + \eta_2(h_2) = 0}$ whenever ${(h_1,h_2) \in H}$. One easily checks that ${\eta_1}$ is a homomorphism, so we are in option (i).

Finally, it is clear that (i) and (ii) cannot both hold, since (i) places a non-trivial constraint on the second component ${h_2}$ of an element ${(h_1,h_2) \in H}$ of ${H}$ for any fixed choice of ${h_1}$. $\Box$
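A small concrete instance of option (i) (chosen for illustration; it is not from the text above): take ${G_1 = G_2 = {\bf Z}/4}$ and ${H = \{(x,y): x = y \hbox{ mod } 2\}}$.

```python
# Toy illustration of Lemma 6 with G1 = G2 = Z/4 (abelian, hence nilpotent)
# and H = {(x, y): x = y mod 2}: the slice {y: (0, y) in H} = {0, 2} is a
# proper subgroup of G2, and we are in option (i) with K = Z/2,
# eta1(x) = x mod 2 and eta2(y) = y mod 2.
G = range(4)
H = {(x, y) for x in G for y in G if (x - y) % 2 == 0}

assert {y for y in G if (0, y) in H} == {0, 2}        # proper subgroup of Z/4
assert all((x % 2 + y % 2) % 2 == 0 for (x, y) in H)  # eta1(h1) + eta2(h2) = 0
assert any(y % 2 != 0 for y in G)                     # eta2 is non-trivial
```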

We also note a similar variant of Lemma 5, which is Lemme 10 of this paper of Serre:

Lemma 7 Let ${p \geq 5}$ be a prime, and let ${H}$ be a closed subgroup of ${SL_2({\bf Z}_p) \times SL_2({\bf Z}_p)}$. Then exactly one of the following hold:
• (i) (${H}$ too small) There exists a proper subgroup ${H'}$ of ${SL_2({\mathbf F}_p) \times SL_2({\mathbf F}_p)}$ such that ${h \hbox{ mod } p \in H'}$ for all ${h \in H}$.
• (ii) ${H=SL_2({\bf Z}_p) \times SL_2({\bf Z}_p)}$.

Proof: As in the proof of Lemma 5, (i) and (ii) cannot both hold. Suppose that (i) does not hold; then for any ${g \in SL_2({\bf Z}_p)}$ there exists ${h_1 \in H}$ such that ${h_1 = (g+O(p), 1 + O(p))}$. Similarly, there exists ${h_0 \in H}$ with ${h_0 = (1+O(p), 1+O(p))}$. Taking commutators of such elements ${h_1}$ and ${h_0}$ (and using the fact that ${SL_2({\mathbf F}_p)}$ is perfect for ${p \geq 5}$, so that every element of ${SL_2({\mathbf F}_p)}$ is a product of commutators), we can find ${h_2 \in H}$ with ${h_2 = (g+O(p), 1+O(p^2))}$. Continuing to take commutators with ${h_0}$ and extracting a limit (using compactness and the closed nature of ${H}$), we can find ${h_\infty \in H}$ with ${h_\infty = (g+O(p),1)}$. Thus, the closed subgroup ${\{ g \in SL_2({\bf Z}_p): (g,1) \in H \}}$ of ${SL_2({\bf Z}_p)}$ does not obey conclusion (i) of Lemma 5, and must therefore obey conclusion (ii); that is to say, ${H}$ contains ${SL_2({\bf Z}_p) \times \{1\}}$. Similarly ${H}$ contains ${\{1\} \times SL_2({\bf Z}_p)}$; multiplying, we end up in conclusion (ii). $\Box$

The most famous result of this type is of course the Goursat lemma, which we phrase here in a somewhat idiosyncratic manner to conform to the pattern of the other lemmas in this post:

Lemma 8 (Goursat lemma) Let ${H}$ be a subgroup of a product ${G_1 \times G_2}$ of two groups ${G_1, G_2}$. Then one of the following hold:
• (i) (${H}$ too small) ${H}$ is contained in ${G'_1 \times G'_2}$ for some subgroups ${G'_1}$, ${G'_2}$ of ${G_1, G_2}$ respectively, with either ${G'_1 \subsetneq G_1}$ or ${G'_2 \subsetneq G_2}$ (or both).
• (ii) (${H}$ too large) There exist normal subgroups ${N_1, N_2}$ of ${G_1, G_2}$ respectively, not both trivial, such that ${H = \pi^{-1}(H')}$ arises from a subgroup ${H'}$ of ${G_1/N_1 \times G_2/N_2}$, where ${\pi: G_1 \times G_2 \rightarrow G_1/N_1 \times G_2/N_2}$ is the quotient map.
• (iii) (Isomorphism) There is a group isomorphism ${\phi: G_1 \rightarrow G_2}$ such that ${H = \{ (g_1, \phi(g_1)): g_1 \in G_1\}}$ is the graph of ${\phi}$. In particular, ${G_1}$ and ${G_2}$ are isomorphic.

Here we almost have a trichotomy, because option (iii) is incompatible with both option (i) and option (ii). However, it is possible for options (i) and (ii) to simultaneously hold.

Proof: If either of the projections ${\pi_1: H \rightarrow G_1}$, ${\pi_2: H \rightarrow G_2}$ from ${H}$ to the factor groups ${G_1,G_2}$ (thus ${\pi_1(h_1,h_2)=h_1}$ and ${\pi_2(h_1,h_2)=h_2}$) fails to be surjective, then we are in option (i). Thus we may assume that these maps are surjective.

Next, if either of the maps ${\pi_1: H \rightarrow G_1}$, ${\pi_2: H \rightarrow G_2}$ fails to be injective, then at least one of the kernels ${N_1 \times \{1\} := \mathrm{ker} \pi_2}$, ${\{1\} \times N_2 := \mathrm{ker} \pi_1}$ is non-trivial. We can then descend down to the quotient ${G_1/N_1 \times G_2/N_2}$ and end up in option (ii).

The only remaining case is when the group homomorphisms ${\pi_1, \pi_2}$ are both bijections, hence are group isomorphisms. If we set ${\phi := \pi_2 \circ \pi_1^{-1}}$ we end up in case (iii). $\Box$
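A minimal concrete instance of option (iii), checked computationally (the choice ${G_1 = G_2 = S_3}$ with ${H}$ the diagonal is an arbitrary illustration):

```python
from itertools import permutations

# Toy illustration of the Goursat trichotomy: for G1 = G2 = S3 and H the
# diagonal subgroup, both coordinate projections are bijective, so H is the
# graph of an isomorphism (here the identity map), as in option (iii).
S3 = set(permutations(range(3)))
H = {(g, g) for g in S3}

proj1 = [h1 for (h1, h2) in H]
proj2 = [h2 for (h1, h2) in H]
assert set(proj1) == S3 and len(proj1) == len(H)   # pi_1 is a bijection
assert set(proj2) == S3 and len(proj2) == len(H)   # pi_2 is a bijection
phi = {h1: h2 for (h1, h2) in H}
assert H == {(g, phi[g]) for g in S3}              # H is the graph of phi
```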

We can combine the Goursat lemma with Lemma 3 to obtain a variant:

Corollary 9 (Nilpotent Goursat lemma) Let ${H}$ be a subgroup of a product ${G_1 \times G_2}$ of two nilpotent groups ${G_1, G_2}$. Then one of the following hold:
• (i) (${H}$ too small) There exists ${i=1,2}$ and a non-trivial group homomorphism ${\eta_i: G_i \rightarrow K}$ into an abelian group ${K = (K,+)}$ such that ${\eta_i(h_i)=0}$ for all ${(h_1,h_2) \in H}$.
• (ii) (${H}$ too large) There exist normal subgroups ${N_1, N_2}$ of ${G_1, G_2}$ respectively, not both trivial, such that ${H = \pi^{-1}(H')}$ arises from a subgroup ${H'}$ of ${G_1/N_1 \times G_2/N_2}$.
• (iii) (Isomorphism) There is a group isomorphism ${\phi: G_1 \rightarrow G_2}$ such that ${H = \{ (g_1, \phi(g_1)): g_1 \in G_1\}}$ is the graph of ${\phi}$. In particular, ${G_1}$ and ${G_2}$ are isomorphic.

Proof: If Lemma 8(i) holds, then by applying Lemma 3 we arrive at our current option (i). The other options are unchanged from Lemma 8, giving the claim. $\Box$

Now we present a lemma involving three groups ${G_1,G_2,G_3}$ that is known in ergodic theory contexts as the “Furstenberg-Weiss argument”, as an argument of this type arose in this paper of Furstenberg and Weiss, though it perhaps implicitly appears in other contexts as well. It has the remarkable feature of being able to enforce the abelian nature of one of the groups once the other options of the lemma are excluded.

Lemma 10 (Furstenberg-Weiss lemma) Let ${H}$ be a subgroup of a product ${G_1 \times G_2 \times G_3}$ of three groups ${G_1, G_2, G_3}$. Then one of the following hold:
• (i) (${H}$ too small) There is some proper subgroup ${G'_3}$ of ${G_3}$ and some ${i=1,2}$ such that ${h_3 \in G'_3}$ whenever ${(h_1,h_2,h_3) \in H}$ and ${h_i = 1}$.
• (ii) (${H}$ too large) There exists a non-trivial normal subgroup ${N_3}$ of ${G_3}$ with ${G_3/N_3}$ abelian, such that ${H = \pi^{-1}(H')}$ arises from a subgroup ${H'}$ of ${G_1 \times G_2 \times G_3/N_3}$, where ${\pi: G_1 \times G_2 \times G_3 \rightarrow G_1 \times G_2 \times G_3/N_3}$ is the quotient map.
• (iii) ${G_3}$ is abelian.

Proof: If the group ${\{ h_3 \in G_3: (1,h_2,h_3) \in H \hbox{ for some } h_2 \in G_2 \}}$ is a proper subgroup of ${G_3}$, then we are in option (i) (with ${i=1}$), so we may assume that

$\displaystyle \{ h_3 \in G_3: (1,h_2,h_3) \in H \hbox{ for some } h_2 \in G_2 \} = G_3.$

Similarly we may assume that

$\displaystyle \{ h_3 \in G_3: (h_1,1,h_3) \in H \hbox{ for some } h_1 \in G_1 \} = G_3.$

Now let ${g_3,g'_3}$ be any two elements of ${G_3}$. By the above assumptions, we can find ${h_1 \in G_1, h_2 \in G_2}$ such that

$\displaystyle (1, h_2, g_3) \in H$

and

$\displaystyle (h_1,1, g'_3) \in H.$

Taking commutators to eliminate the ${h_1,h_2}$ terms, we conclude that

$\displaystyle (1, 1, [g_3,g'_3]) \in H.$

Thus the group ${\{ h_3 \in G_3: (1,1,h_3) \in H \}}$ contains every commutator ${[g_3,g'_3]}$, and thus contains the entire group ${[G_3,G_3]}$ generated by these commutators. If ${G_3}$ fails to be abelian, then ${[G_3,G_3]}$ is a non-trivial normal subgroup of ${G_3}$, and ${H}$ now arises from ${G_1 \times G_2 \times G_3/[G_3,G_3]}$ in the obvious fashion, placing one in option (ii). Hence the only remaining case is when ${G_3}$ is abelian, giving us option (iii). $\Box$
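The commutator trick used here is purely mechanical: commutators in a product group are taken componentwise, and any commutator with the identity is trivial. Here is an exhaustive check of this identity in ${S_3 \times S_3 \times S_3}$ (an arbitrary non-abelian test case):

```python
from itertools import permutations

# Concrete check of the commutator trick in the proof of Lemma 10: the
# commutator of (1, h2, g3) and (h1, 1, g3') is exactly (1, 1, [g3, g3']),
# verified for all choices in S3 x S3 x S3 (g3p stands for g3').
S3 = list(permutations(range(3)))
e = (0, 1, 2)

def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p): return tuple(sorted(range(3), key=lambda i: p[i]))
def comm(a, b): return mul(mul(a, b), mul(inv(a), inv(b)))
def comm3(u, v): return tuple(comm(u[i], v[i]) for i in range(3))

ok = all(comm3((e, h2, g3), (h1, e, g3p)) == (e, e, comm(g3, g3p))
         for h1 in S3 for h2 in S3 for g3 in S3 for g3p in S3)
assert ok
```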

As before, we can combine this with previous lemmas to obtain a variant in the nilpotent case:

Lemma 11 (Nilpotent Furstenberg-Weiss lemma) Let ${H}$ be a subgroup of a product ${G_1 \times G_2 \times G_3}$ of three nilpotent groups ${G_1, G_2, G_3}$. Then one of the following hold:
• (i) (${H}$ too small) There exists ${i=1,2}$ and group homomorphisms ${\eta_i: G'_i \rightarrow K}$, ${\eta_3: G_3 \rightarrow K}$ for some abelian group ${K = (K,+)}$, with ${\eta_3}$ non-trivial, such that ${\eta_i(h_i) + \eta_3(h_3) = 0}$ whenever ${(h_1,h_2,h_3) \in H}$, where ${G'_i}$ is the projection of ${H}$ to ${G_i}$.
• (ii) (${H}$ too large) There exists a non-trivial normal subgroup ${N_3}$ of ${G_3}$, such that ${H = \pi^{-1}(H')}$ arises from a subgroup ${H'}$ of ${G_1 \times G_2 \times G_3/N_3}$.
• (iii) ${G_3}$ is abelian.

Informally, this lemma asserts that if ${(h_1,h_2,h_3)}$ is a variable ranging in some subgroup ${H}$ of ${G_1 \times G_2 \times G_3}$, then either (i) there is a non-trivial abelian equation that constrains ${h_3}$ in terms of either ${h_1}$ or ${h_2}$; (ii) ${h_3}$ is not fully determined by ${h_1}$ and ${h_2}$; or (iii) ${G_3}$ is abelian.

Proof: Applying Lemma 10, we are already done if conclusions (ii) or (iii) of that lemma hold, so suppose instead that conclusion (i) holds for say ${i=1}$. Then the group ${\{ (h_1,h_3) \in G_1 \times G_3: (h_1,h_2,h_3) \in H \hbox{ for some } h_2 \in G_2 \}}$ is not of the form ${G'_1 \times G_3}$, since it only contains those ${(1,h_3)}$ with ${h_3 \in G'_3}$. Applying Lemma 6, we obtain group homomorphisms ${\eta_1: G'_1 \rightarrow K}$, ${\eta_3: G_3 \rightarrow K}$ into an abelian group ${K= (K,+)}$, with ${\eta_3}$ non-trivial, such that ${\eta_1(h_1) + \eta_3(h_3) = 0}$ whenever ${(h_1,h_2,h_3) \in H}$, placing us in option (i). $\Box$

The Furstenberg-Weiss argument is often used (though not precisely in this form) to establish that certain key structure groups arising in ergodic theory are abelian; see for instance Proposition 6.3(1) of this paper of Host and Kra for an example.

One can get more structural control on ${H}$ in the Furstenberg-Weiss lemma in option (iii) if one also broadens options (i) and (ii):

Lemma 12 (Variant of Furstenberg-Weiss lemma) Let ${H}$ be a subgroup of a product ${G_1 \times G_2 \times G_3}$ of three groups ${G_1, G_2, G_3}$. Then one of the following hold:
• (i) (${H}$ too small) There is some proper subgroup ${G'_{ij}}$ of ${G_i \times G_j}$ for some ${1 \leq i < j \leq 3}$ such that ${(h_i,h_j) \in G'_{ij}}$ whenever ${(h_1,h_2,h_3) \in H}$. (In other words, the projection of ${H}$ to ${G_i \times G_j}$ is not surjective.)
• (ii) (${H}$ too large) There exist normal subgroups ${N_1, N_2, N_3}$ of ${G_1, G_2, G_3}$ respectively, not all trivial, such that ${H = \pi^{-1}(H')}$ arises from a subgroup ${H'}$ of ${G_1/N_1 \times G_2/N_2 \times G_3/N_3}$, where ${\pi: G_1 \times G_2 \times G_3 \rightarrow G_1/N_1 \times G_2/N_2 \times G_3/N_3}$ is the quotient map.
• (iii) ${G_1,G_2,G_3}$ are abelian and isomorphic. Furthermore, there exist isomorphisms ${\phi_1: G_1 \rightarrow K}$, ${\phi_2: G_2 \rightarrow K}$, ${\phi_3: G_3 \rightarrow K}$ to an abelian group ${K = (K,+)}$ such that

$\displaystyle H = \{ (g_1,g_2,g_3) \in G_1 \times G_2 \times G_3: \phi_1(g_1) + \phi_2(g_2) + \phi_3(g_3) = 0 \}.$

The ability to encode an abelian additive relation in terms of group-theoretic properties is vaguely reminiscent of the group configuration theorem.

Proof: We apply Lemma 10. Option (i) of that lemma implies option (i) of the current lemma, and similarly for option (ii), so we may assume without loss of generality that ${G_3}$ is abelian. By permuting we may also assume that ${G_1,G_2}$ are abelian, and will use additive notation for these groups.

We may assume that the projections of ${H}$ to ${G_1 \times G_2}$ and ${G_3}$ are surjective, else we are in option (i). The group ${\{ g_3 \in G_3: (1,1,g_3) \in H\}}$ is then a normal subgroup of ${G_3}$; we may assume it is trivial, otherwise we can quotient it out and be in option (ii). Thus ${H}$ can be expressed as a graph ${\{ (h_1,h_2,\phi(h_1,h_2)): h_1 \in G_1, h_2 \in G_2\}}$ for some map ${\phi: G_1 \times G_2 \rightarrow G_3}$. As ${H}$ is a group, ${\phi}$ must be a homomorphism, and we can write it as ${\phi(h_1,h_2) = -\phi_1(h_1) - \phi_2(h_2)}$ for some homomorphisms ${\phi_1: G_1 \rightarrow G_3}$, ${\phi_2: G_2 \rightarrow G_3}$. Thus elements ${(h_1,h_2,h_3)}$ of ${H}$ obey the constraint ${\phi_1(h_1) + \phi_2(h_2) + h_3 = 0}$.

If ${\phi_1}$ or ${\phi_2}$ fails to be injective, then we can quotient out by their kernels and end up in option (ii). If ${\phi_1}$ fails to be surjective, then the projection of ${H}$ to ${G_2 \times G_3}$ also fails to be surjective (since for ${(h_1,h_2,h_3) \in H}$, ${\phi_2(h_2) + h_3}$ is now constrained to lie in the range of ${\phi_1}$) and we are in option (i). Similarly if ${\phi_2}$ fails to be surjective. Thus we may assume that the homomorphisms ${\phi_1,\phi_2}$ are bijective and thus group isomorphisms. Setting ${\phi_3}$ to the identity, we arrive at option (iii). $\Box$

Combining this lemma with Lemma 3, we obtain a nilpotent version:

Corollary 13 (Variant of nilpotent Furstenberg-Weiss lemma) Let ${H}$ be a subgroup of a product ${G_1 \times G_2 \times G_3}$ of three nilpotent groups ${G_1, G_2, G_3}$. Then one of the following hold:
• (i) (${H}$ too small) There are homomorphisms ${\eta_i: G_i \rightarrow K}$, ${\eta_j: G_j \rightarrow K}$ to some abelian group ${K =(K,+)}$ for some ${1 \leq i < j \leq 3}$, with ${\eta_i, \eta_j}$ not both trivial, such that ${\eta_i(h_i) + \eta_j(h_j) = 0}$ whenever ${(h_1,h_2,h_3) \in H}$.
• (ii) (${H}$ too large) There exist normal subgroups ${N_1, N_2, N_3}$ of ${G_1, G_2, G_3}$ respectively, not all trivial, such that ${H = \pi^{-1}(H')}$ arises from a subgroup ${H'}$ of ${G_1/N_1 \times G_2/N_2 \times G_3/N_3}$, where ${\pi: G_1 \times G_2 \times G_3 \rightarrow G_1/N_1 \times G_2/N_2 \times G_3/N_3}$ is the quotient map.
• (iii) ${G_1,G_2,G_3}$ are abelian and isomorphic. Furthermore, there exist isomorphisms ${\phi_1: G_1 \rightarrow K}$, ${\phi_2: G_2 \rightarrow K}$, ${\phi_3: G_3 \rightarrow K}$ to an abelian group ${K = (K,+)}$ such that

$\displaystyle H = \{ (g_1,g_2,g_3) \in G_1 \times G_2 \times G_3: \phi_1(g_1) + \phi_2(g_2) + \phi_3(g_3) = 0 \}.$

Here is another variant of the Furstenberg-Weiss lemma, attributed to Serre by Ribet (see Lemma 3.3):

Lemma 14 (Serre’s lemma) Let ${H}$ be a subgroup of a finite product ${G_1 \times \dots \times G_k}$ of groups ${G_1,\dots,G_k}$ with ${k \geq 2}$. Then one of the following hold:
• (i) (${H}$ too small) There is some proper subgroup ${G'_{ij}}$ of ${G_i \times G_j}$ for some ${1 \leq i < j \leq k}$ such that ${(h_i,h_j) \in G'_{ij}}$ whenever ${(h_1,\dots,h_k) \in H}$.
• (ii) (${H}$ too large) One has ${H = G_1 \times \dots \times G_k}$.
• (iii) One of the ${G_i}$ has a non-trivial abelian quotient ${G_i/N_i}$.

Proof: The claim is trivial for ${k=2}$ (and we don’t need (iii) in this case), so suppose that ${k \geq 3}$. We can assume that each ${G_i}$ is a perfect group, ${G_i = [G_i,G_i]}$, otherwise we can quotient out by the commutator subgroup ${[G_i,G_i]}$ and arrive in option (iii). Similarly, we may assume that all the projections of ${H}$ to ${G_i \times G_j}$, ${1 \leq i < j \leq k}$, are surjective, otherwise we are in option (i).

We now claim that for any ${1 \leq j < k}$ and any ${g_k \in G_k}$, one can find ${(h_1,\dots,h_k) \in H}$ with ${h_i=1}$ for ${1 \leq i \leq j}$ and ${h_k = g_k}$. For ${j=1}$ this follows from the surjectivity of the projection of ${H}$ to ${G_1 \times G_k}$. Now suppose inductively that ${1 < j < k}$ and the claim has already been proven for ${j-1}$. Since ${G_k}$ is perfect, it suffices to establish this claim for ${g_k}$ of the form ${g_k = [g'_k, g''_k]}$ for some ${g'_k, g''_k \in G_k}$. By induction hypothesis, we can find ${(h'_1,\dots,h'_k) \in H}$ with ${h'_i = 1}$ for ${1 \leq i < j}$ and ${h'_k = g'_k}$. By surjectivity of the projection of ${H}$ to ${G_j \times G_k}$, one can find ${(h''_1,\dots,h''_k) \in H}$ with ${h''_j = 1}$ and ${h''_k=g''_k}$. Taking commutators of these two elements, we obtain the claim.

Setting ${j = k-1}$, we conclude that ${H}$ contains ${1 \times \dots \times 1 \times G_k}$. Similarly for permutations. Multiplying these together we see that ${H}$ contains all of ${G_1 \times \dots \times G_k}$, and we are in option (ii). $\Box$

In a recent post I discussed how the Riemann zeta function ${\zeta}$ can be locally approximated by a polynomial, in the sense that for randomly chosen ${t \in [T,2T]}$ one has an approximation

$\displaystyle \zeta(\frac{1}{2} + it - \frac{2\pi i z}{\log T}) \approx P_t( e^{2\pi i z/N} ) \ \ \ \ \ (1)$

where ${N}$ grows slowly with ${T}$, and ${P_t}$ is a polynomial of degree ${N}$. Assuming the Riemann hypothesis (as we will throughout this post), the zeroes of ${P_t}$ should all lie on the unit circle, and one should then be able to write ${P_t}$ as a scalar multiple of the characteristic polynomial of (the inverse of) a unitary matrix ${U = U_t \in U(N)}$, which we normalise as

$\displaystyle P_t(Z) = \exp(A_t) \mathrm{det}(1 - ZU). \ \ \ \ \ (2)$

Here ${A_t}$ is some quantity depending on ${t}$. We view ${U}$ as a random element of ${U(N)}$; in the limit ${T \rightarrow \infty}$, the GUE hypothesis is equivalent to ${U}$ becoming equidistributed with respect to Haar measure on ${U(N)}$ (also known as the Circular Unitary Ensemble, CUE; it is to the unit circle what the Gaussian Unitary Ensemble (GUE) is to the real line). One can also view ${U}$ as analogous to the “geometric Frobenius” operator in the function field setting, though unfortunately it is difficult at present to make this analogy any more precise (due, among other things, to the lack of a sufficiently satisfactory theory of the “field of one element“).

Taking logarithmic derivatives of (2), we have

$\displaystyle -\frac{P'_t(Z)}{P_t(Z)} = \mathrm{tr}( U (1-ZU)^{-1} ) = \sum_{j=1}^\infty Z^{j-1} \mathrm{tr} U^j \ \ \ \ \ (3)$

and hence on taking logarithmic derivatives of (1) in the ${z}$ variable we (heuristically) have

$\displaystyle -\frac{2\pi i}{\log T} \frac{\zeta'}{\zeta}( \frac{1}{2} + it - \frac{2\pi i z}{\log T}) \approx \frac{2\pi i}{N} \sum_{j=1}^\infty e^{2\pi i jz/N} \mathrm{tr} U^j.$

Morally speaking, we have

$\displaystyle - \frac{\zeta'}{\zeta}( \frac{1}{2} + it - \frac{2\pi i z}{\log T}) = \sum_{n=1}^\infty \frac{\Lambda(n)}{n^{1/2+it}} e^{2\pi i z (\log n/\log T)}$

so on comparing coefficients we expect to interpret the moments ${\mathrm{tr} U^j}$ of ${U}$ as a finite Dirichlet series:

$\displaystyle \mathrm{tr} U^j \approx \frac{N}{\log T} \sum_{T^{(j-1)/N} < n \leq T^{j/N}} \frac{\Lambda(n)}{n^{1/2+it}}. \ \ \ \ \ (4)$

To understand the distribution of ${U}$ in the unitary group ${U(N)}$, it suffices to understand the distribution of the moments

$\displaystyle {\bf E}_t \prod_{j=1}^k (\mathrm{tr} U^j)^{a_j} (\overline{\mathrm{tr} U^j})^{b_j} \ \ \ \ \ (5)$

where ${{\bf E}_t}$ denotes averaging over ${t \in [T,2T]}$, and ${k, a_1,\dots,a_k, b_1,\dots,b_k \geq 0}$. The GUE hypothesis asserts that in the limit ${T \rightarrow \infty}$, these moments converge to their CUE counterparts

$\displaystyle {\bf E}_{\mathrm{CUE}} \prod_{j=1}^k (\mathrm{tr} U^j)^{a_j} (\overline{\mathrm{tr} U^j})^{b_j} \ \ \ \ \ (6)$

where ${U}$ is now drawn uniformly in ${U(N)}$ with respect to the CUE ensemble, and ${{\bf E}_{\mathrm{CUE}}}$ denotes expectation with respect to that measure.

The moment (6) vanishes unless one has the homogeneity condition

$\displaystyle \sum_{j=1}^k j a_j = \sum_{j=1}^k j b_j. \ \ \ \ \ (7)$

This follows from the fact that for any phase ${\theta \in {\bf R}}$, ${e(\theta) U}$ has the same distribution as ${U}$, where we use the number theory notation ${e(\theta) := e^{2\pi i\theta}}$.

In the case when the degree ${\sum_{j=1}^k j a_j}$ is low, we can use representation theory to establish the following simple formula for the moment (6), as evaluated by Diaconis and Shahshahani:

Proposition 1 (Low moments in CUE model) If

$\displaystyle \sum_{j=1}^k j a_j \leq N, \ \ \ \ \ (8)$

then the moment (6) vanishes unless ${a_j=b_j}$ for all ${j}$, in which case it is equal to

$\displaystyle \prod_{j=1}^k j^{a_j} a_j!. \ \ \ \ \ (9)$

Another way of viewing this proposition is that for ${U}$ distributed according to CUE, the random variables ${\mathrm{tr} U^j}$ are distributed like independent complex random variables of mean zero and variance ${j}$, as long as one only considers moments obeying (8). This identity definitely breaks down for larger values of ${a_j}$, so one only obtains central limit theorems in certain limiting regimes, notably when one only considers a fixed number of ${j}$‘s and lets ${N}$ go to infinity. (The paper of Diaconis and Shahshahani writes ${\sum_{j=1}^k a_j + b_j}$ in place of ${\sum_{j=1}^k j a_j}$, but I believe this to be a typo.)

Proof: Let ${D}$ be the left-hand side of (8). We may assume that (7) holds since we are done otherwise, hence

$\displaystyle D = \sum_{j=1}^k j a_j = \sum_{j=1}^k j b_j.$

Our starting point is Schur-Weyl duality. Namely, we consider the ${n^D}$-dimensional complex vector space

$\displaystyle ({\bf C}^n)^{\otimes D} = {\bf C}^n \otimes \dots \otimes {\bf C}^n.$

This space has an action of the product group ${S_D \times GL_n({\bf C})}$: the symmetric group ${S_D}$ acts by permutation on the ${D}$ tensor factors, while the general linear group ${GL_n({\bf C})}$ acts diagonally on the ${{\bf C}^n}$ factors, and the two actions commute with each other. Schur-Weyl duality gives a decomposition

$\displaystyle ({\bf C}^n)^{\otimes D} \equiv \bigoplus_\lambda V^\lambda_{S_D} \otimes V^\lambda_{GL_n({\bf C})} \ \ \ \ \ (10)$

where ${\lambda}$ ranges over Young tableaux of size ${D}$ with at most ${n}$ rows, ${V^\lambda_{S_D}}$ is the ${S_D}$-irreducible unitary representation corresponding to ${\lambda}$ (which can be constructed for instance using Specht modules), and ${V^\lambda_{GL_n({\bf C})}}$ is the ${GL_n({\bf C})}$-irreducible polynomial representation with highest weight ${\lambda}$.

Let ${\pi \in S_D}$ be a permutation consisting of ${a_j}$ cycles of length ${j}$ (this is uniquely determined up to conjugation), and let ${g \in GL_n({\bf C})}$. The pair ${(\pi,g)}$ then acts on ${({\bf C}^n)^{\otimes D}}$, with the action on basis elements ${e_{i_1} \otimes \dots \otimes e_{i_D}}$ given by

$\displaystyle g e_{\pi(i_1)} \otimes \dots \otimes g e_{\pi(i_D)}.$

The trace of this action can then be computed as

$\displaystyle \sum_{i_1,\dots,i_D \in \{1,\dots,n\}} g_{\pi(i_1),i_1} \dots g_{\pi(i_D),i_D}$

where ${g_{i,j}}$ is the ${ij}$ matrix coefficient of ${g}$. Breaking up into cycles and summing, this is just

$\displaystyle \prod_{j=1}^k \mathrm{tr}(g^j)^{a_j}.$

But we can also compute this trace using the Schur-Weyl decomposition (10), yielding the identity

$\displaystyle \prod_{j=1}^k \mathrm{tr}(g^j)^{a_j} = \sum_\lambda \chi_\lambda(\pi) s_\lambda(g) \ \ \ \ \ (11)$

where ${\chi_\lambda: S_D \rightarrow {\bf C}}$ is the character on ${S_D}$ associated to ${V^\lambda_{S_D}}$, and ${s_\lambda: GL_n({\bf C}) \rightarrow {\bf C}}$ is the character on ${GL_n({\bf C})}$ associated to ${V^\lambda_{GL_n({\bf C})}}$. As is well known, ${s_\lambda(g)}$ is just the Schur polynomial of weight ${\lambda}$ applied to the (algebraic, generalised) eigenvalues of ${g}$. We can specialise to unitary matrices to conclude that

$\displaystyle \prod_{j=1}^k \mathrm{tr}(U^j)^{a_j} = \sum_\lambda \chi_\lambda(\pi) s_\lambda(U)$

and similarly

$\displaystyle \prod_{j=1}^k \mathrm{tr}(U^j)^{b_j} = \sum_\lambda \chi_\lambda(\pi') s_\lambda(U)$

where ${\pi' \in S_D}$ consists of ${b_j}$ cycles of length ${j}$ for each ${j=1,\dots,k}$. On the other hand, the characters ${s_\lambda}$ are an orthonormal system on ${L^2(U(N))}$ with the CUE measure. Thus we can write the expectation (6) as

$\displaystyle \sum_\lambda \chi_\lambda(\pi) \overline{\chi_\lambda(\pi')}. \ \ \ \ \ (12)$

Now recall that ${\lambda}$ ranges over all the Young tableaux of size ${D}$ with at most ${N}$ rows. But by (8) we have ${D \leq N}$, and so the condition of having ${N}$ rows is redundant. Hence ${\lambda}$ now ranges over all Young tableaux of size ${D}$, which as is well known enumerates all the irreducible representations of ${S_D}$. One can then use the standard orthogonality properties of characters to show that the sum (12) vanishes if ${\pi}$, ${\pi'}$ are not conjugate, and is equal to ${D!}$ divided by the size of the conjugacy class of ${\pi}$ (or equivalently, to the size of the centraliser of ${\pi}$) otherwise. But the latter expression is easily computed to be ${\prod_{j=1}^k j^{a_j} a_j!}$, giving the claim. $\Box$
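The final counting step (that ${D!}$ divided by the conjugacy class size equals ${\prod_j j^{a_j} a_j!}$) is easy to check by brute force for small ${D}$. The following sketch, a sanity check rather than part of the argument, enumerates ${S_4}$ and compares class sizes with the centraliser formula.

```python
from itertools import permutations
from math import factorial

def cycle_type(perm):
    # sorted cycle lengths of a permutation given in one-line notation on {0,...,D-1}
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        x, length = start, 0
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths))

D = 4
counts = {}  # conjugacy class sizes in S_D, keyed by cycle type
for perm in permutations(range(D)):
    ct = cycle_type(perm)
    counts[ct] = counts.get(ct, 0) + 1

def centralizer_size(ct):
    # prod_j j^{a_j} a_j!, where a_j is the number of cycles of length j
    out = 1
    for j in set(ct):
        out *= j ** ct.count(j) * factorial(ct.count(j))
    return out

for ct, class_size in counts.items():
    # |S_D| / |conjugacy class| = |centraliser|
    assert factorial(D) // class_size == centralizer_size(ct)
```

For instance the transpositions in ${S_4}$ form a class of size ${6}$, with centraliser of size ${24/6 = 4 = 1^2 \cdot 2! \cdot 2^1 \cdot 1!}$.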

Example 2 We illustrate the identity (11) when ${D=3}$, ${n \geq 3}$. The Schur polynomials are given as

$\displaystyle s_{3}(g) = \sum_i \lambda_i^3 + \sum_{i < j} (\lambda_i^2 \lambda_j + \lambda_i \lambda_j^2) + \sum_{i < j < k} \lambda_i \lambda_j \lambda_k$

$\displaystyle s_{2,1}(g) = \sum_{i \neq j} \lambda_i^2 \lambda_j + 2 \sum_{i < j < k} \lambda_i \lambda_j \lambda_k$

$\displaystyle s_{1,1,1}(g) = \sum_{i < j < k} \lambda_i \lambda_j \lambda_k$

where ${\lambda_1,\dots,\lambda_n}$ are the (generalised) eigenvalues of ${g}$, and the formula (11) in this case becomes

$\displaystyle \mathrm{tr}(g^3) = s_{3}(g) - s_{2,1}(g) + s_{1,1,1}(g)$

$\displaystyle \mathrm{tr}(g^2) \mathrm{tr}(g) = s_{3}(g) - s_{1,1,1}(g)$

$\displaystyle \mathrm{tr}(g)^3 = s_{3}(g) + 2 s_{2,1}(g) + s_{1,1,1}(g).$

The functions ${s_{1,1,1}, s_{2,1}, s_3}$ are orthonormal on ${U(n)}$, so the three functions ${\mathrm{tr}(g^3), \mathrm{tr}(g^2) \mathrm{tr}(g), \mathrm{tr}(g)^3}$ are also orthogonal, and their ${L^2}$ norms are ${\sqrt{3}}$, ${\sqrt{2}}$, and ${\sqrt{6}}$ respectively, reflecting the size in ${S_3}$ of the centralisers of the permutations ${(123)}$, ${(12)}$, and ${\mathrm{id}}$ respectively. If ${n}$ is instead set to say ${2}$, then the ${s_{1,1,1}}$ terms now disappear (the Young tableau here has too many rows), and the three quantities here now have some non-trivial covariance.
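The three instances of (11) above are easy to sanity-check numerically. The sketch below evaluates the power sums and the Schur polynomials (via their monomial expansions) on arbitrary integer stand-ins for the eigenvalues, so that the comparison is exact.

```python
from itertools import combinations, permutations

lam = [2, 3, 5, 7]  # arbitrary integer "eigenvalues"; integer arithmetic keeps the check exact

def p(r):
    # power sum tr(g^r) = sum_i lambda_i^r
    return sum(x ** r for x in lam)

e3 = sum(a * b * c for a, b, c in combinations(lam, 3))          # s_{1,1,1}
s3 = p(3) + sum(a * a * b + a * b * b for a, b in combinations(lam, 2)) + e3
s21 = sum(a * a * b for a, b in permutations(lam, 2)) + 2 * e3   # sum over ordered pairs i != j
s111 = e3

assert p(3) == s3 - s21 + s111            # tr(g^3)
assert p(2) * p(1) == s3 - s111           # tr(g^2) tr(g)
assert p(1) ** 3 == s3 + 2 * s21 + s111   # tr(g)^3
```

With the choice of ${\lambda}$'s above, one gets ${s_3 = 1726}$, ${s_{2,1} = 1470}$, ${s_{1,1,1} = 247}$, and all three identities hold exactly.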

Example 3 Consider the moment ${{\bf E}_{\mathrm{CUE}} |\mathrm{tr} U^j|^2}$. For ${j \leq N}$, the above proposition shows us that this moment is equal to ${j}$. What happens for ${j>N}$? The formula (12) computes this moment as

$\displaystyle \sum_\lambda |\chi_\lambda(\pi)|^2$

where ${\pi}$ is a cycle of length ${j}$ in ${S_j}$, and ${\lambda}$ ranges over all Young tableaux with size ${j}$ and at most ${N}$ rows. The Murnaghan-Nakayama rule tells us that ${\chi_\lambda(\pi)}$ vanishes unless ${\lambda}$ is a hook (all but one of the non-zero rows consisting of just a single box; this also can be interpreted as an exterior power representation on the space ${{\bf C}^j_{\sum=0}}$ of vectors in ${{\bf C}^j}$ whose coordinates sum to zero), in which case it is equal to ${\pm 1}$ (depending on the parity of the number of non-zero rows). As such we see that this moment is equal to ${N}$. Thus in general we have

$\displaystyle {\bf E}_{\mathrm{CUE}} |\mathrm{tr} U^j|^2 = \min(j,N). \ \ \ \ \ (13)$
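The exact formula (13) can also be confirmed empirically. The following Monte Carlo sketch (with `haar_unitary` implementing the standard QR-based recipe for sampling Haar measure on ${U(n)}$) estimates ${{\bf E}_{\mathrm{CUE}} |\mathrm{tr} U^j|^2}$ for ${N=4}$ and a range of ${j}$ on both sides of ${N}$.

```python
import numpy as np

def haar_unitary(n, rng):
    # QR-based sampling of Haar measure on U(n): take a complex Ginibre matrix,
    # QR-factorise, and fix the column phases using the diagonal of R
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(0)
N, samples = 4, 10000
moments = np.zeros(8)
for _ in range(samples):
    eig = np.linalg.eigvals(haar_unitary(N, rng))
    for j in range(1, 9):
        moments[j - 1] += abs(np.sum(eig ** j)) ** 2
moments /= samples

for j in range(1, 9):
    # empirical moments should track min(j, N) up to sampling error
    assert abs(moments[j - 1] - min(j, N)) < 0.3
```

The estimates climb linearly in ${j}$ up to ${j = N}$ and then plateau at ${N}$, as (13) predicts.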

Now we discuss what is known for the analogous moments (5). Here we shall be rather non-rigorous, in particular ignoring an annoying “Archimedean” issue that the product of the ranges ${T^{(j-1)/N} < n \leq T^{j/N}}$ and ${T^{(k-1)/N} < n \leq T^{k/N}}$ is not quite the range ${T^{(j+k-1)/N} < n \leq T^{(j+k)/N}}$ but instead leaks into the adjacent range ${T^{(j+k-2)/N} < n \leq T^{(j+k-1)/N}}$. This issue can be addressed by working in a “weak” sense in which parameters such as ${j,k}$ are averaged over fairly long scales, or by passing to a function field analogue of these questions, but we shall simply ignore the issue completely and work at a heuristic level only. For similar reasons we will ignore some technical issues arising from the sharp cutoff of ${t}$ to the range ${[T,2T]}$ (it would be slightly better technically to use a smooth cutoff).

One can morally expand out (5) using (4) as

$\displaystyle (\frac{N}{\log T})^{J+K} \sum_{n_1,\dots,n_J,m_1,\dots,m_K} \frac{\Lambda(n_1) \dots \Lambda(n_J) \Lambda(m_1) \dots \Lambda(m_K)}{n_1^{1/2} \dots n_J^{1/2} m_1^{1/2} \dots m_K^{1/2}} \times \ \ \ \ \ (14)$

$\displaystyle \times {\bf E}_t (m_1 \dots m_K / n_1 \dots n_J)^{it}$

where ${J := \sum_{j=1}^k a_j}$, ${K := \sum_{j=1}^k b_j}$, and the integers ${n_i,m_i}$ are in the ranges

$\displaystyle T^{(j-1)/N} < n_{a_1 + \dots + a_{j-1} + i} \leq T^{j/N}$

for ${j=1,\dots,k}$ and ${1 \leq i \leq a_j}$, and

$\displaystyle T^{(j-1)/N} < m_{b_1 + \dots + b_{j-1} + i} \leq T^{j/N}$

for ${j=1,\dots,k}$ and ${1 \leq i \leq b_j}$. Morally, the expectation here is negligible unless

$\displaystyle m_1 \dots m_K = (1 + O(1/T)) n_1 \dots n_J \ \ \ \ \ (15)$

in which case the expectation oscillates with magnitude one. In particular, if (7) fails (with some room to spare) then the moment (5) should be negligible, which is consistent with the analogous behaviour for the moments (6). Now suppose that (8) holds (with some room to spare). Then ${n_1 \dots n_J}$ is significantly less than ${T}$, so the ${O(1/T)}$ multiplicative error in (15) becomes an additive error of ${o(1)}$. On the other hand, because of the fundamental integrality gap – that the integers are always separated from each other by a distance of at least ${1}$ – this forces the products ${m_1 \dots m_K}$ and ${n_1 \dots n_J}$ to in fact be equal:

$\displaystyle m_1 \dots m_K = n_1 \dots n_J. \ \ \ \ \ (16)$

The von Mangoldt factors ${\Lambda(n_1) \dots \Lambda(n_J) \Lambda(m_1) \dots \Lambda(m_K)}$ effectively restrict ${n_1,\dots,n_J,m_1,\dots,m_K}$ to be prime (the effect of prime powers is negligible). By the fundamental theorem of arithmetic, the constraint (16) then forces ${J=K}$, and ${n_1,\dots,n_J}$ to be a permutation of ${m_1,\dots,m_K}$, which then forces ${a_j = b_j}$ for all ${j=1,\dots,k}$. For a given ${n_1,\dots,n_J}$, the number of possible ${m_1,\dots,m_K}$ is then ${\prod_{j=1}^k a_j!}$, and the expectation in (14) is equal to ${1}$. Thus this expectation is morally

$\displaystyle (\frac{N}{\log T})^{J+K} \sum_{n_1,\dots,n_J} \frac{\Lambda^2(n_1) \dots \Lambda^2(n_J) }{n_1 \dots n_J} \prod_{j=1}^k a_j!$

and using Mertens’ theorem this soon simplifies asymptotically to the same quantity as in Proposition 1. Thus we see that (morally at least) the moments (5) associated to the zeta function asymptotically match the moments (6) coming from the CUE model in the low degree case (8), thus lending support to the GUE hypothesis. (These observations are basically due to Rudnick and Sarnak, with the degree ${1}$ case of pair correlations due to Montgomery, and the degree ${2}$ case due to Hejhal.)

With some rare exceptions (such as those estimates coming from “Kloostermania”), the moment estimates of Rudnick and Sarnak basically represent the state of the art for what is known for the moments (5). For instance, Montgomery’s pair correlation conjecture, in our language, is basically the analogue of (13) for ${{\bf E}_t}$, thus

$\displaystyle {\bf E}_{t} |\mathrm{tr} U^j|^2 \approx \min(j,N) \ \ \ \ \ (17)$

for all ${j \geq 0}$. Montgomery showed this for (essentially) the range ${j \leq N}$ (as remarked above, this is a special case of the Rudnick-Sarnak result), but no further cases of this conjecture are known.

These estimates can be used to give some non-trivial information on the largest and smallest spacings between zeroes of the zeta function, which in our notation corresponds to spacings between eigenvalues of ${U}$. One method still used today for this purpose is due to Montgomery and Odlyzko, and was greatly simplified by Conrey, Ghosh, and Gonek. The basic idea, translated to our random matrix notation, is as follows. Suppose ${Q_t(Z)}$ is some random polynomial depending on ${t}$ of degree at most ${N}$. Let ${\lambda_1,\dots,\lambda_N}$ denote the eigenvalues of ${U}$, and let ${c > 0}$ be a parameter. Observe from the pigeonhole principle that if the quantity

$\displaystyle \sum_{j=1}^N \int_0^{c/N} |Q_t( e(\theta) \lambda_j )|^2\ d\theta \ \ \ \ \ (18)$

exceeds the quantity

$\displaystyle \int_{0}^{1} |Q_t(e(\theta))|^2\ d\theta, \ \ \ \ \ (19)$

then the arcs ${\{ e(\theta) \lambda_j: 0 \leq \theta \leq c/N \}}$ cannot all be disjoint, and hence there exists a pair of eigenvalues making an angle of less than ${c/N}$ (${c}$ times the mean angle separation). Similarly, if the quantity (18) falls below that of (19), then these arcs cannot cover the unit circle, and hence there exists a pair of eigenvalues making an angle of greater than ${c}$ times the mean angle separation. By judiciously choosing the coefficients of ${Q_t}$ as functions of the moments ${\mathrm{tr}(U^j)}$, one can ensure that both quantities (18), (19) can be computed by the Rudnick-Sarnak estimates (or estimates of equivalent strength); indeed, from the residue theorem one can write (18) as

$\displaystyle \frac{1}{2\pi i} \int_0^{c/N} (\int_{|z| = 1+\varepsilon} - \int_{|z|=1-\varepsilon}) Q_t( e(\theta) z ) \overline{Q_t}( \frac{1}{e(\theta) z} ) \frac{P'_t(z)}{P_t(z)}\ dz$

for sufficiently small ${\varepsilon>0}$, and this can be computed (in principle, at least) using (3) if the coefficients of ${Q_t}$ are in an appropriate form. Using this sort of technology (translated back to the Riemann zeta function setting), one can show that gaps between consecutive zeroes of zeta are less than ${\mu}$ times the mean spacing and greater than ${\lambda}$ times the mean spacing infinitely often for certain ${0 < \mu < 1 < \lambda}$; the current records are ${\mu = 0.50412}$ (due to Goldston and Turnage-Butterbaugh) and ${\lambda = 3.18}$ (due to Bui and Milinovich, who input some additional estimates beyond the Rudnick-Sarnak set, namely the twisted fourth moment estimates of Bettin, Bui, Li, and Radziwill, and using a technique based on Hall’s method rather than the Montgomery-Odlyzko method).

It would be of great interest if one could push the upper bound ${\mu}$ for the smallest gap below ${1/2}$. The reason for this is that this would then exclude the Alternative Hypothesis that the spacings between zeroes are asymptotically always (or almost always) non-zero half-integer multiples of the mean spacing, or in our language that the gaps between the phases ${\theta}$ of the eigenvalues ${e^{2\pi i\theta}}$ of ${U}$ are asymptotically always non-zero integer multiples of ${1/2N}$. The significance of this hypothesis is that it is implied by the existence of a Siegel zero (of conductor a small power of ${T}$); see this paper of Conrey and Iwaniec. (In our language, what is going on is that if there is a Siegel zero in which ${L(1,\chi)}$ is very close to zero, then ${1*\chi}$ behaves like the Kronecker delta, and hence (by the Riemann-Siegel formula) the combined ${L}$-function ${\zeta(s) L(s,\chi)}$ will have a polynomial approximation which in our language looks like a scalar multiple of ${1 + e(\theta) Z^{2N+M}}$, where ${q \approx T^{M/N}}$ and ${\theta}$ is a phase. The zeroes of this approximation lie on a coset of the ${(2N+M)^{th}}$ roots of unity; the polynomial ${P_t}$ is a factor of this approximation and hence its zeroes will also lie in this coset, implying in particular that all eigenvalue spacings are multiples of ${1/(2N+M)}$. Taking ${M = o(N)}$ then gives the claim.)

Unfortunately, the known methods do not seem to break this barrier without some significant new input; already the original paper of Montgomery and Odlyzko observed this limitation for their particular technique (and in fact falls very slightly short, as observed in unpublished work of Goldston and of Milinovich). In this post I would like to record another way to see this, by providing an “alternative” probability distribution to the CUE distribution (which one might dub the Alternative Circular Unitary Ensemble (ACUE)), which is indistinguishable from CUE in low moments, in the sense that the expectation ${{\bf E}_{\mathrm{ACUE}}}$ for this model also obeys Proposition 1, but for which the phase spacings are always a multiple of ${1/2N}$. This shows that if one is to rule out the Alternative Hypothesis (and thus in particular rule out Siegel zeroes), one needs to input some additional moment information beyond Proposition 1. It would be interesting to see if any of the other known moment estimates that go beyond this proposition are consistent with this alternative distribution. (UPDATE: it looks like they are; see Remark 7 below.)

To describe this alternative distribution, let us first recall the Weyl description of the CUE measure on the unitary group ${U(N)}$ in terms of the distribution of the phases ${\theta_1,\dots,\theta_N \in {\bf R}/{\bf Z}}$ of the eigenvalues, randomly permuted in any order. This distribution is given by the probability measure

$\displaystyle \frac{1}{N!} |V(\theta)|^2\ d\theta_1 \dots d\theta_N; \ \ \ \ \ (20)$

where

$\displaystyle V(\theta) := \prod_{1 \leq i < j \leq N} (e(\theta_i) - e(\theta_j))$

is the Vandermonde determinant; see for instance this previous blog post for the derivation of a very similar formula for the GUE distribution, which can be adapted to CUE without much difficulty. To see that this is a probability measure, first observe the Vandermonde determinant identity

$\displaystyle V(\theta) = \sum_{\pi \in S_N} \mathrm{sgn}(\pi) e(\theta \cdot \pi(\rho))$

where ${\theta := (\theta_1,\dots,\theta_N)}$, ${\cdot}$ denotes the dot product, and ${\rho := (1,2,\dots,N)}$ is the “long word”, which implies that (20) is a trigonometric series with constant term ${1}$; it is also clearly non-negative, so it is a probability measure. One can thus generate a random CUE matrix by first drawing ${(\theta_1,\dots,\theta_N) \in ({\bf R}/{\bf Z})^N}$ using the probability measure (20), and then generating ${U}$ to be a random unitary matrix with eigenvalues ${e(\theta_1),\dots,e(\theta_N)}$.

For the alternative distribution, we first draw ${(\theta_1,\dots,\theta_N)}$ on the discrete torus ${(\frac{1}{2N}{\bf Z}/{\bf Z})^N}$ (thus each ${e(\theta_j)}$ is a ${2N^{th}}$ root of unity) with probability density function

$\displaystyle \frac{1}{(2N)^N} \frac{1}{N!} |V(\theta)|^2 \ \ \ \ \ (21)$

then shift by a common phase ${\alpha \in {\bf R}/{\bf Z}}$ drawn uniformly at random, and then select ${U}$ to be a random unitary matrix with eigenvalues ${e(\theta_1+\alpha), \dots, e(\theta_N+\alpha)}$. Let us first verify that (21) is a probability density function. Clearly it is non-negative. It is a linear combination of exponentials of the form ${e(\theta \cdot (\pi(\rho)-\pi'(\rho)))}$ for ${\pi,\pi' \in S_N}$. The diagonal contribution ${\pi=\pi'}$ gives the constant function ${\frac{1}{(2N)^N}}$, which has total mass one. All of the other exponentials have a non-zero frequency ${\pi(\rho)-\pi'(\rho)}$ that is not of the form ${2Nk}$ for any lattice vector ${k \in {\bf Z}^N}$, and hence will have mean zero on ${(\frac{1}{2N}{\bf Z}/{\bf Z})^N}$. The claim follows.

From this construction it is clear that the matrix ${U}$ drawn from this alternative distribution will have all eigenvalue phase spacings be a non-zero multiple of ${1/2N}$. Now we verify that the alternative distribution also obeys Proposition 1. The alternative distribution remains invariant under rotation by phases, so the claim is again clear when (7) fails. Inspecting the proof of that proposition, we see that it suffices to show that the Schur polynomials ${s_\lambda}$, ${s_{\lambda'}}$ with ${\lambda,\lambda'}$ of equal size at most ${N}$ remain orthonormal with respect to the alternative measure. That is to say,

$\displaystyle \int_{U(N)} s_\lambda(U) \overline{s_{\lambda'}(U)}\ d\mu_{\mathrm{CUE}}(U) = \int_{U(N)} s_\lambda(U) \overline{s_{\lambda'}(U)}\ d\mu_{\mathrm{ACUE}}(U)$

when ${\lambda,\lambda'}$ have size equal to each other and at most ${N}$. In this case the phase ${\alpha}$ in the definition of ${U}$ is irrelevant. In terms of eigenvalue measures, we are then reduced to showing that

$\displaystyle \int_{({\bf R}/{\bf Z})^N} s_\lambda(\theta) \overline{s_{\lambda'}(\theta)} |V(\theta)|^2\ d\theta = \frac{1}{(2N)^N} \sum_{\theta \in (\frac{1}{2N}{\bf Z}/{\bf Z})^N} s_\lambda(\theta) \overline{s_{\lambda'}(\theta)} |V(\theta)|^2.$

By Fourier decomposition, it then suffices to show that the trigonometric polynomial ${s_\lambda(\theta) \overline{s_{\lambda'}(\theta)} |V(\theta)|^2}$ does not contain any components of the form ${e( \theta \cdot 2N k)}$ for some non-zero lattice vector ${k \in {\bf Z}^N}$. But we have already observed that ${|V(\theta)|^2}$ is a linear combination of plane waves of the form ${e(\theta \cdot (\pi(\rho)-\pi'(\rho)))}$ for ${\pi,\pi' \in S_N}$. Also, as is well known, ${s_\lambda(\theta)}$ is a linear combination of plane waves ${e( \theta \cdot \kappa )}$ where ${\kappa}$ is majorised by ${\lambda}$, and similarly ${s_{\lambda'}(\theta)}$ is a linear combination of plane waves ${e( \theta \cdot \kappa' )}$ where ${\kappa'}$ is majorised by ${\lambda'}$. So the product ${s_\lambda(\theta) \overline{s_{\lambda'}(\theta)} |V(\theta)|^2}$ is a linear combination of plane waves of the form ${e(\theta \cdot (\kappa - \kappa' + \pi(\rho) - \pi'(\rho)))}$. But every coefficient of the vector ${\kappa - \kappa' + \pi(\rho) - \pi'(\rho)}$ lies between ${1-2N}$ and ${2N-1}$, and so cannot be of the form ${2Nk}$ for any non-zero lattice vector ${k}$, giving the claim.

Example 4 If ${N=2}$, then the distribution (21) assigns a probability of ${\frac{1}{4^2 2!} 2}$ to any pair ${(\theta_1,\theta_2) \in (\frac{1}{4} {\bf Z}/{\bf Z})^2}$ that is a permuted rotation of ${(0,\frac{1}{4})}$, and a probability of ${\frac{1}{4^2 2!} 4}$ to any pair that is a permuted rotation of ${(0,\frac{1}{2})}$. Thus, a matrix ${U}$ drawn from the alternative distribution will be conjugate to a phase rotation of ${\mathrm{diag}(1, i)}$ with probability ${1/2}$, and to ${\mathrm{diag}(1,-1)}$ with probability ${1/2}$.
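The ${N=2}$ computation in Example 4 can be reproduced by exact enumeration of the ${16}$ points of the discrete torus. A short sketch (the spacing classes correspond to the two conjugacy types named above):

```python
import cmath
from itertools import product

N = 2

def e(t):
    # the number-theorist's exponential e(t) = exp(2*pi*i*t)
    return cmath.exp(2j * cmath.pi * t)

grid = [k / (2 * N) for k in range(2 * N)]  # the points of (1/2N)Z/Z

probs = {}
for th in product(grid, repeat=N):
    v = e(th[0]) - e(th[1])  # the Vandermonde factor for N = 2
    probs[th] = abs(v) ** 2 / ((2 * N) ** N * 2)  # formula (21), with N! = 2

assert abs(sum(probs.values()) - 1) < 1e-12  # (21) is indeed a probability distribution

# total mass of the two spacing classes: |theta_1 - theta_2| = 1/4 vs 1/2 (mod 1)
quarter = sum(pr for th, pr in probs.items() if ((th[0] - th[1]) * 4) % 4 in (1, 3))
half = sum(pr for th, pr in probs.items() if ((th[0] - th[1]) * 4) % 4 == 2)
assert abs(quarter - 0.5) < 1e-12 and abs(half - 0.5) < 1e-12
```

The two spacing classes each receive total probability ${1/2}$, matching the conjugacy classes of ${\mathrm{diag}(1,i)}$ and ${\mathrm{diag}(1,-1)}$ described above.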

A similar computation when ${N=3}$ gives ${U}$ conjugate to a phase rotation of ${\mathrm{diag}(1, e(1/6), e(1/3))}$ with probability ${1/12}$, to a phase rotation of ${\mathrm{diag}( 1, e(1/6), -1)}$ or its adjoint with probability ${1/3}$ each, and to a phase rotation of ${\mathrm{diag}(1, e(1/3), e(2/3))}$ with probability ${1/4}$.

Remark 5 For large ${N}$ it does not seem that this specific alternative distribution is the only distribution consistent with Proposition 1 and which has all phase spacings a non-zero multiple of ${1/2N}$; in particular, it may not be the only distribution consistent with a Siegel zero. Still, it is a very explicit distribution that might serve as a test case for the limitations of various arguments for controlling quantities such as the largest or smallest spacing between zeroes of zeta. The ACUE is in some sense the distribution that maximally resembles CUE (in the sense that it has the greatest number of Fourier coefficients agreeing) while still also being consistent with the Alternative Hypothesis, and so should be the most difficult enemy to eliminate if one wishes to disprove that hypothesis.

In some cases, even just a tiny improvement in known results would be able to exclude the alternative hypothesis. For instance, if the alternative hypothesis held, then ${|\mathrm{tr}(U^j)|}$ is periodic in ${j}$ with period ${2N}$, so from Proposition 1 for the alternative distribution one has

$\displaystyle {\bf E}_{\mathrm{ACUE}} |\mathrm{tr} U^j|^2 = \min_{k \in {\bf Z}} |j-2Nk|$

for ${j}$ not a multiple of ${2N}$ (at non-zero multiples of ${2N}$ the left-hand side instead equals ${N^2}$, by the periodicity just noted); in either case this differs from (13) for any ${|j| > N}$. (This fact was implicitly observed recently by Baluyot, in the original context of the zeta function.) Thus a verification of the pair correlation conjecture (17) for even a single ${j}$ with ${|j| > N}$ would rule out the alternative hypothesis. Unfortunately, such a verification appears to be of comparable difficulty with (an averaged version of) the Hardy-Littlewood conjecture, with power saving error term. (This is consistent with the fact that Siegel zeroes can cause distortions in the Hardy-Littlewood conjecture, as (implicitly) discussed in this previous blog post.)
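For ${N=2}$ these ACUE moments can be computed exactly by enumerating the weighted discrete torus (the uniform phase shift ${\alpha}$ does not affect ${|\mathrm{tr} U^j|}$). The sketch below confirms the displayed formula for several ${j}$ that are not multiples of ${2N}$, and in particular exhibits the divergence from the CUE value at ${j = 3 > N}$.

```python
import cmath
from itertools import product

N = 2

def e(t):
    # e(t) = exp(2*pi*i*t)
    return cmath.exp(2j * cmath.pi * t)

grid = [k / (2 * N) for k in range(2 * N)]  # the points of (1/2N)Z/Z

def acue_moment(j):
    # E_ACUE |tr U^j|^2, by exact enumeration against the weight (21)
    total = 0.0
    for th in product(grid, repeat=N):
        w = abs(e(th[0]) - e(th[1])) ** 2 / ((2 * N) ** N * 2)
        total += w * abs(sum(e(j * t) for t in th)) ** 2
    return total

for j in (1, 2, 3, 5, 6, 7):  # j not a multiple of 2N = 4
    predicted = min(abs(j - 2 * N * k) for k in range(-2, 3))
    assert abs(acue_moment(j) - predicted) < 1e-12

# at j = 3 the ACUE moment is 1, whereas the CUE value (13) would be min(3, N) = 2
assert abs(acue_moment(3) - 1.0) < 1e-12
```

So the two ensembles agree up to ${j = N}$, exactly as Proposition 1 requires, and first separate beyond that range.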

Remark 6 One can view the CUE as normalised Lebesgue measure on ${U(N)}$ (viewed as a smooth submanifold of ${{\bf C}^{N^2}}$). One can similarly view ACUE as normalised Lebesgue measure on the (disconnected) smooth submanifold of ${U(N)}$ consisting of those unitary matrices whose phase spacings are non-zero integer multiples of ${1/2N}$; informally, ACUE is CUE restricted to this lower dimensional submanifold. As is well known, the phases of CUE eigenvalues form a determinantal point process with kernel ${K(\theta,\theta') = \frac{1}{N} \sum_{j=0}^{N-1} e(j(\theta - \theta'))}$ (or one can equivalently take ${K(\theta,\theta') = \frac{\sin(\pi N (\theta-\theta'))}{N\sin(\pi(\theta-\theta'))}}$); in a similar spirit, the phases of ACUE eigenvalues, once they are rotated to be ${2N^{th}}$ roots of unity, become a discrete determinantal point process on those roots of unity with exactly the same kernel (except for a normalising factor of ${\frac{1}{2}}$). In particular, the ${k}$-point correlation functions of ACUE (after this rotation) are precisely the restriction of the ${k}$-point correlation functions of CUE after normalisation, that is to say they are proportional to ${\mathrm{det}( K( \theta_i,\theta_j) )_{1 \leq i,j \leq k}}$.

Remark 7 One family of estimates that go beyond the Rudnick-Sarnak family of estimates are twisted moment estimates for the zeta function, such as ones that give asymptotics for

$\displaystyle \int_T^{2T} |\zeta(\frac{1}{2}+it)|^{2k} |Q(\frac{1}{2}+it)|^2\ dt$

for some small even exponent ${2k}$ (almost always ${2}$ or ${4}$) and some short Dirichlet polynomial ${Q}$; see for instance this paper of Bettin, Bui, Li, and Radziwill for some examples of such estimates. The analogous unitary matrix average would be something like

$\displaystyle {\bf E}_t |P_t(1)|^{2k} |Q_t(1)|^2$

where ${Q_t}$ is now some random medium degree polynomial that depends on the unitary matrix ${U}$ associated to ${P_t}$ (and in applications will typically also contain some negative power of ${\exp(A_t)}$ to cancel the corresponding powers of ${\exp(A_t)}$ in ${|P_t(1)|^{2k}}$). Unfortunately such averages are generally unable to distinguish the CUE from the ACUE. For instance, if all the coefficients of ${Q}$ involve products of traces ${\mathrm{tr}(U^k)}$ of total order less than ${N-k}$, then in terms of the eigenvalue phases ${\theta}$, ${|Q(1)|^2}$ is a linear combination of plane waves ${e(\theta \cdot \xi)}$ where the frequencies ${\xi}$ have coefficients of magnitude less than ${N-k}$. On the other hand, as each coefficient of ${P_t}$ is an elementary symmetric function of the eigenvalues, ${P_t(1)}$ is a linear combination of plane waves ${e(\theta \cdot \xi)}$ where the frequencies ${\xi}$ have coefficients of magnitude at most ${1}$. Thus ${|P_t(1)|^{2k} |Q_t(1)|^2}$ is a linear combination of plane waves where the frequencies ${\xi}$ have coefficients of magnitude less than ${N}$, and thus is orthogonal to the difference between the CUE and ACUE measures on the phase torus ${({\bf R}/{\bf Z})^N}$ by the previous arguments. In other words, ${|P_t(1)|^{2k} |Q_t(1)|^2}$ has the same expectation with respect to ACUE as it does with respect to CUE. Thus one can only start distinguishing CUE from ACUE if the mollifier ${Q_t}$ has degree close to or exceeding ${N}$, which corresponds to Dirichlet polynomials ${Q}$ of length close to or exceeding ${T}$, which is far beyond current technology for such moment estimates.

Remark 8 The GUE hypothesis for the zeta function asserts that the average

$\displaystyle \lim_{T \rightarrow \infty} \frac{1}{T} \int_T^{2T} \sum_{\gamma_1,\dots,\gamma_k \hbox{ distinct}} \eta( \frac{\log T}{2\pi}(\gamma_1-t),\dots, \frac{\log T}{2\pi}(\gamma_k-t))\ dt \ \ \ \ \ (22)$

is equal to

$\displaystyle \int_{{\bf R}^k} \eta(x) \det(K(x_i-x_j))_{1 \leq i,j \leq k}\ dx_1 \dots dx_k \ \ \ \ \ (23)$

for any ${k \geq 1}$ and any test function ${\eta: {\bf R}^k \rightarrow {\bf C}}$, where ${K(x) := \frac{\sin \pi x}{\pi x}}$ is the Dyson sine kernel and ${\gamma_i}$ are the ordinates of zeroes of the zeta function. This corresponds to the CUE distribution for ${U}$. The ACUE distribution then corresponds to an “alternative gaussian unitary ensemble (AGUE)” hypothesis, in which the average (22) is instead predicted to equal a Riemann sum version of the integral (23):

$\displaystyle \int_0^1 2^{-k} \sum_{x_1,\dots,x_k \in \frac{1}{2} {\bf Z} + \theta} \eta(x) \det(K(x_i-x_j))_{1 \leq i,j \leq k}\ d\theta.$

This is a stronger version of the alternative hypothesis that the spacing between adjacent zeroes is almost always approximately a half-integer multiple of the mean spacing. I do not know of any known moment estimates for Dirichlet series that are able to eliminate this AGUE hypothesis (even assuming GRH). (UPDATE: These facts have also been independently observed in forthcoming work of Lagarias and Rodgers.)

Let ${G = (G,+)}$, ${H = (H,+)}$ be additive groups (i.e., groups with an abelian addition group law). A map ${f: G \rightarrow H}$ is a homomorphism if one has

$\displaystyle f(x+y) - f(x) - f(y) = 0$

for all ${x,y \in G}$. A map ${f: G \rightarrow H}$ is an affine homomorphism if one has

$\displaystyle f(x_1) - f(x_2) + f(x_3) - f(x_4) = 0 \ \ \ \ \ (1)$

for all additive quadruples ${(x_1,x_2,x_3,x_4)}$ in ${G}$, by which we mean that ${x_1,x_2,x_3,x_4 \in G}$ and ${x_1-x_2+x_3-x_4=0}$. The two notions are closely related; it is easy to verify that ${f}$ is an affine homomorphism if and only if ${f}$ is the sum of a homomorphism and a constant.
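This equivalence is easy to check on examples. The sketch below (with an arbitrary choice of homomorphism and constant on ${G = H = {\bf Z}}$) verifies (1) on random additive quadruples, and checks that subtracting ${f(0)}$ recovers a genuine homomorphism.

```python
import random

def f(x):
    # an affine homomorphism on Z: a homomorphism (x -> 3x) plus a constant (7)
    return 3 * x + 7

rng = random.Random(0)
for _ in range(1000):
    x1, x2, x3 = (rng.randint(-50, 50) for _ in range(3))
    x4 = x1 - x2 + x3  # completes the additive quadruple x1 - x2 + x3 - x4 = 0
    assert f(x1) - f(x2) + f(x3) - f(x4) == 0  # identity (1)

def g(x):
    # subtracting the constant part leaves a homomorphism
    return f(x) - f(0)

assert all(g(x + y) == g(x) + g(y) for x in range(-5, 6) for y in range(-5, 6))
```

The converse direction is immediate, since both homomorphisms and constants satisfy (1) and the condition is linear in ${f}$.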

Now suppose that ${H}$ also has a translation-invariant metric ${d}$. A map ${f: G \rightarrow H}$ is said to be a quasimorphism if one has

$\displaystyle f(x+y) - f(x) - f(y) = O(1) \ \ \ \ \ (2)$

for all ${x,y \in G}$, where ${O(1)}$ denotes a quantity at a bounded distance from the origin. Similarly, ${f: G \rightarrow H}$ is an affine quasimorphism if

$\displaystyle f(x_1) - f(x_2) + f(x_3) - f(x_4) = O(1) \ \ \ \ \ (3)$

for all additive quadruples ${(x_1,x_2,x_3,x_4)}$ in ${G}$. Again, one can check that ${f}$ is an affine quasimorphism if and only if it is the sum of a quasimorphism and a constant (with the implied constant of the quasimorphism controlled by the implied constant of the affine quasimorphism). (Since every constant is itself a quasimorphism, it is in fact the case that affine quasimorphisms are quasimorphisms, but now the implied constant in the latter is not controlled by the implied constant of the former.)

“Trivial” examples of quasimorphisms include the sum of a homomorphism and a bounded function. Are there others? In some cases, the answer is no. For instance, suppose we have a quasimorphism ${f: {\bf Z} \rightarrow {\bf R}}$. Iterating (2), we see that ${f(kx) = kf(x) + O(k)}$ for any integer ${x}$ and natural number ${k}$, which we can rewrite as ${f(kx)/kx = f(x)/x + O(1/|x|)}$ for non-zero ${x}$. Also, ${f}$ is Lipschitz (since ${f(x+1) - f(x) = f(1) + O(1)}$ by (2)). Sending ${k \rightarrow \infty}$, we can verify that ${f(x)/x}$ is a Cauchy sequence as ${x \rightarrow \infty}$ and thus tends to some limit ${\alpha}$; we have ${\alpha = f(x)/x + O(1/x)}$ for ${x \geq 1}$, hence ${f(x) = \alpha x + O(1)}$ for positive ${x}$, and then one can use (2) one last time to obtain ${f(x) = \alpha x + O(1)}$ for all ${x}$. Thus ${f}$ is the sum of the homomorphism ${x \mapsto \alpha x}$ and a bounded function.
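The conclusion can be illustrated concretely: the Beatty-type map ${x \mapsto \lfloor x\varphi \rfloor}$ is a quasimorphism on ${\bf Z}$ (its defect lies in ${\{0,1\}}$), and the slope ${\alpha}$ recovered by the argument above is just ${\varphi}$. A quick numerical sketch (the choice of irrational ${\varphi}$ is arbitrary):

```python
import math

PHI = (1 + 5 ** 0.5) / 2  # an arbitrary irrational slope (the golden ratio)

def f(n):
    # f(n) = floor(n * phi)
    return math.floor(n * PHI)

# the defect f(x+y) - f(x) - f(y) always lies in {0, 1}, so f is a quasimorphism
for x in range(-200, 200):
    for y in range(-200, 200):
        assert f(x + y) - f(x) - f(y) in (0, 1)

# as the argument predicts, f(x) = alpha * x + O(1) with alpha = lim f(x)/x = phi
assert all(abs(f(x) - PHI * x) <= 1 for x in range(-10 ** 5, 10 ** 5, 997))
```

Here ${f}$ differs from the homomorphism ${x \mapsto \varphi x}$ (into ${\bf R}$) by a function bounded by ${1}$, consistent with the general claim.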

In general, one can phrase this problem in the language of group cohomology (discussed in this previous post). Call a map ${f: G \rightarrow H}$ a ${0}$-cocycle. A ${1}$-cocycle is a map ${\rho: G \times G \rightarrow H}$ obeying the identity

$\displaystyle \rho(x,y+z) + \rho(y,z) = \rho(x,y) + \rho(x+y,z)$

for all ${x,y,z \in G}$. Given a ${0}$-cocycle ${f: G \rightarrow H}$, one can form its derivative ${\partial f: G \times G \rightarrow H}$ by the formula

$\displaystyle \partial f(x,y) := f(x+y)-f(x)-f(y).$

Such functions are called ${1}$-coboundaries. It is easy to see that the abelian group of ${1}$-coboundaries is a subgroup of the abelian group of ${1}$-cocycles. The quotient of these two groups is the first group cohomology of ${G}$ with coefficients in ${H}$, and is denoted ${H^1(G; H)}$.
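As a quick sanity check of the definitions (a toy computation, with ${G = H = {\bf Z}/7{\bf Z}}$ and a randomly chosen ${0}$-cocycle), one can verify that every ${1}$-coboundary ${\partial f}$ does obey the ${1}$-cocycle identity:

```python
import random

# Toy check over G = H = Z/7Z: the derivative of any 0-cocycle f
# obeys the 1-cocycle identity
#   rho(x, y+z) + rho(y, z) = rho(x, y) + rho(x+y, z).
n = 7
f = {x: random.randrange(n) for x in range(n)}

def df(x, y):  # the 1-coboundary (derivative) of f
    return (f[(x + y) % n] - f[x] - f[y]) % n

for x in range(n):
    for y in range(n):
        for z in range(n):
            assert (df(x, (y + z) % n) + df(y, z)) % n == \
                   (df(x, y) + df((x + y) % n, z)) % n
```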

If a ${0}$-cocycle is bounded then its derivative is a bounded ${1}$-coboundary. The quotient of the group of bounded ${1}$-cocycles by the derivatives of bounded ${0}$-cocycles is called the bounded first group cohomology of ${G}$ with coefficients in ${H}$, and is denoted ${H^1_b(G; H)}$. There is an obvious homomorphism ${\phi}$ from ${H^1_b(G; H)}$ to ${H^1(G; H)}$, formed by taking a coset of the space of derivatives of bounded ${0}$-cocycles, and enlarging it to a coset of the space of ${1}$-coboundaries. By chasing all the definitions, we see that every quasimorphism from ${G}$ to ${H}$ is the sum of a homomorphism and a bounded function if and only if this homomorphism ${\phi}$ is injective; in fact the quotient of the space of quasimorphisms by the sum of homomorphisms and bounded functions is isomorphic to the kernel of ${\phi}$.

In additive combinatorics, one is often working with functions which only have additive structure a fraction of the time, thus for instance (1) or (3) might only hold “${1\%}$ of the time”. This makes it somewhat difficult to directly interpret the situation in terms of group cohomology. However, thanks to tools such as the Balog-Szemerédi-Gowers lemma, one can upgrade this sort of ${1\%}$-structure to ${100\%}$-structure – at the cost of restricting the domain to a smaller set. Here I record one such instance of this phenomenon, thus giving a tentative link between additive combinatorics and group cohomology. (I thank Yuval Wigderson for suggesting the problem of locating such a link.)

Theorem 1 Let ${G = (G,+)}$, ${H = (H,+)}$ be additive groups with ${|G|=N}$, let ${S}$ be a subset of ${H}$, let ${E \subset G}$, and let ${f: E \rightarrow H}$ be a function such that

$\displaystyle f(x_1) - f(x_2) + f(x_3) - f(x_4) \in S$

for ${\geq K^{-1} N^3}$ additive quadruples ${(x_1,x_2,x_3,x_4)}$ in ${E}$. Then there exists a subset ${A}$ of ${G}$ containing ${0}$ with ${|A| \gg K^{-O(1)} N}$, a subset ${X}$ of ${H}$ with ${|X| \ll K^{O(1)}}$, and a function ${g: 4A-4A \rightarrow H}$ such that

$\displaystyle g(x+y) - g(x)-g(y) \in X + 496S - 496S \ \ \ \ \ (4)$

for all ${x, y \in 2A-2A}$ (thus, the derivative ${\partial g}$ takes values in ${X + 496 S - 496 S}$ on ${2A - 2A}$), and such that for each ${h \in A}$, one has

$\displaystyle f(x+h) - f(x) - g(h) \in 8S - 8S \ \ \ \ \ (5)$

for ${\gg K^{-O(1)} N}$ values of ${x \in E}$.

Presumably the constants ${8}$ and ${496}$ can be improved further, but we have not attempted to optimise these constants. We chose ${2A-2A}$ as the domain on which one has a bounded derivative, as one can use the Bogolyubov lemma (see e.g., Proposition 4.39 of my book with Van Vu) to find a large Bohr set inside ${2A-2A}$. In applications, the set ${S}$ need not have bounded size, or even bounded doubling; for instance, in the inverse ${U^4}$ theory over a small finite field ${F}$, one would be interested in the situation where ${H}$ is the group of ${n \times n}$ matrices with coefficients in ${F}$ (for some large ${n}$), and ${S}$ is the subset consisting of those matrices of rank bounded by some bound ${C = O(1)}$.

Proof: By hypothesis, there are ${\geq K^{-1} N^3}$ triples ${(h,x,y) \in G^3}$ such that ${x,x+h,y,y+h \in E}$ and

$\displaystyle f(x+h) - f(x) \in f(y+h)-f(y) + S. \ \ \ \ \ (6)$

Thus, there is a set ${B \subset G}$ with ${|B| \gg K^{-1} N}$ such that for all ${h \in B}$, one has (6) for ${\gg K^{-1} N^2}$ pairs ${(x,y) \in G^2}$ with ${x,x+h,y,y+h \in E}$; in particular, there exists ${y = y(h) \in E \cap (E-h)}$ such that (6) holds for ${\gg K^{-1} N}$ values of ${x \in E \cap (E-h)}$. Setting ${g_0(h) := f(y(h)+h) - f(y(h))}$, we conclude that for each ${h \in B}$, one has

$\displaystyle f(x+h) - f(x) \in g_0(h) + S \ \ \ \ \ (7)$

for ${\gg K^{-1} N}$ values of ${x \in E \cap (E-h)}$.

Consider the bipartite graph whose vertex sets are two copies of ${E}$, with ${x}$ and ${x+h}$ connected by a (directed) edge if ${h \in B}$ and ${f(x+h) - f(x) \in g_0(h) + S}$. Then this graph has ${\gg K^{-2} N^2}$ edges. Applying (a slight modification of) the Balog-Szemerédi-Gowers theorem (for instance by modifying the proof of Corollary 5.19 of my book with Van Vu), we can then find a subset ${C}$ of ${E}$ with ${|C| \gg K^{-O(1)} N}$ with the property that for any ${x_1,x_3 \in C}$, there exist ${\gg K^{-O(1)} N^3}$ triples ${(x_2,y_1,y_2) \in E^3}$ such that the edges ${(x_1,y_1), (x_2,y_1), (x_2,y_2), (x_3,y_2)}$ all lie in this bipartite graph. This implies that, for all ${x_1,x_3 \in C}$, there exist ${\gg K^{-O(1)} N^7}$ septuples ${(x_2,y_1,y_2,z_{11},z_{21},z_{22},z_{32}) \in G^7}$ obeying the constraints

$\displaystyle f(y_j) - f(x_i), f(y_j+z_{ij}) - f(x_i+z_{ij}) \in g_0(y_j-x_i) + S$

and ${y_j, x_i, y_j+z_{ij}, x_i+z_{ij} \in E}$ for ${ij = 11, 21, 22, 32}$. These constraints imply in particular that

$\displaystyle f(x_3) - f(x_1) \in f(x_3+z_{32}) - f(y_2+z_{32}) + f(y_2+z_{22}) - f(x_2+z_{22}) + f(x_2+z_{21}) - f(y_1+z_{21}) + f(y_1+z_{11}) - f(x_1+z_{11}) + 4S - 4S.$

Also observe that

$\displaystyle x_3 - x_1 = (x_3+z_{32}) - (y_2+z_{32}) + (y_2+z_{22}) - (x_2+z_{22}) + (x_2+z_{21}) - (y_1+z_{21}) + (y_1+z_{11}) - (x_1+z_{11}).$

Thus, if ${h \in G}$ and ${x_3,x_1 \in C}$ are such that ${x_3-x_1 = h}$, we see that

$\displaystyle f(w_1) - f(w_2) + f(w_3) - f(w_4) + f(w_5) - f(w_6) + f(w_7) - f(w_8) \in f(x_3) - f(x_1) + 4S - 4S$

for ${\gg K^{-O(1)} N^7}$ octuples ${(w_1,w_2,w_3,w_4,w_5,w_6,w_7,w_8) \in E^8}$ in the hyperplane

$\displaystyle h = w_1 - w_2 + w_3 - w_4 + w_5 - w_6 + w_7 - w_8.$

By the pigeonhole principle, this implies that for any fixed ${h \in G}$, there can be at most ${O(K^{O(1)})}$ sets of the form ${f(x_3)-f(x_1) + 4S-4S}$ with ${x_3-x_1=h}$, ${x_1,x_3 \in C}$ that are pairwise disjoint. Using a greedy algorithm, we conclude that there is a set ${W_h}$ of cardinality ${O(K^{O(1)})}$, such that each set ${f(x_3) - f(x_1) + 4S-4S}$ with ${x_3-x_1=h}$, ${x_1,x_3 \in C}$ intersects ${w+4S -4S}$ for some ${w \in W_h}$, or in other words that

$\displaystyle f(x_3) - f(x_1) \in W_{x_3-x_1} + 8S-8S \ \ \ \ \ (8)$

whenever ${x_1,x_3 \in C}$. In particular,

$\displaystyle \sum_{h \in G} \sum_{w \in W_h} | \{ (x_1,x_3) \in C^2: x_3-x_1 = h; f(x_3) - f(x_1) \in w + 8S-8S \}| \geq |C|^2 \gg K^{-O(1)} N^2.$

This implies that there exists a subset ${A}$ of ${G}$ with ${|A| \gg K^{-O(1)} N}$, and an element ${g_1(h) \in W_h}$ for each ${h \in A}$, such that

$\displaystyle | \{ (x_1,x_3) \in C^2: x_3-x_1 = h; f(x_3) - f(x_1) \in g_1(h) + 8S-8S \}| \gg K^{-O(1)} N \ \ \ \ \ (9)$

for all ${h \in A}$. Note we may assume without loss of generality that ${0 \in A}$ and ${g_1(0)=0}$.

Suppose that ${h_1,\dots,h_{16} \in A}$ are such that

$\displaystyle \sum_{i=1}^{16} (-1)^{i-1} h_i = 0. \ \ \ \ \ (10)$

By construction of ${A}$, and permuting labels, we can find ${\gg K^{-O(1)} N^{16}}$ 32-tuples ${(x_1,\dots,x_{16},y_1,\dots,y_{16}) \in C^{32}}$ such that

$\displaystyle y_i - x_i = (-1)^{i-1} h_i$

and

$\displaystyle f(y_i) - f(x_i) \in (-1)^{i-1} g_1(h_i) + 8S - 8S$

for ${i=1,\dots,16}$. We sum this to obtain

$\displaystyle f(y_1) + \sum_{i=1}^{15} (f(y_{i+1})-f(x_i)) - f(x_{16}) \in \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 128 S - 128 S$

and hence by (8)

$\displaystyle f(y_1) - f(x_{16}) + \sum_{i=1}^{15} W_{k_i} \in \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 248 S - 248 S$

where ${k_i := y_{i+1}-x_i}$. Since

$\displaystyle y_1 - x_{16} + \sum_{i=1}^{15} k_i = 0$

we see that there are only ${N^{16}}$ possible values of ${(y_1,x_{16},k_1,\dots,k_{15})}$. By the pigeonhole principle, we conclude that at most ${O(K^{O(1)})}$ of the sets ${\sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 248 S - 248 S}$ can be pairwise disjoint. Arguing as before, we conclude that there exists a set ${X}$ of cardinality ${O(K^{O(1)})}$ such that

$\displaystyle \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) \in X + 496 S - 496 S \ \ \ \ \ (11)$

whenever (10) holds.

For any ${h \in 4A-4A}$, write ${h}$ arbitrarily as ${h = \sum_{i=1}^8 (-1)^{i-1} h_i}$ for some ${h_1,\dots,h_8 \in A}$ (with ${h_5=\dots=h_8=0}$ if ${h \in 2A-2A}$, and ${h_2 = \dots = h_8 = 0}$ if ${h \in A}$) and then set

$\displaystyle g(h) := \sum_{i=1}^8 (-1)^{i-1} g_1(h_i).$

Then from (11) we have (4). For ${h \in A}$ we have ${g(h) = g_1(h)}$, and (5) then follows from (9). $\Box$

This is a sequel to this previous blog post, in which we discussed the effect of the heat flow evolution

$\displaystyle \partial_t P(t,z) = \partial_{zz} P(t,z)$

on the zeroes of a time-dependent family of polynomials ${z \mapsto P(t,z)}$, with a particular focus on the case when the polynomials ${z \mapsto P(t,z)}$ had real zeroes. Here (inspired by some discussions I had during a recent conference on the Riemann hypothesis in Bristol) we record the analogous theory in which the polynomials instead have zeroes on a circle ${\{ z: |z| = \sqrt{q} \}}$, with the heat flow slightly adjusted to compensate for this. As we shall discuss shortly, a key example of this situation arises when ${P}$ is the numerator of the zeta function of a curve.

More precisely, let ${g}$ be a natural number. We will say that a polynomial

$\displaystyle P(z) = \sum_{j=0}^{2g} a_j z^j$

of degree ${2g}$ (so that ${a_{2g} \neq 0}$) obeys the functional equation if the ${a_j}$ are all real and

$\displaystyle a_j = q^{g-j} a_{2g-j}$

for all ${j=0,\dots,2g}$, thus

$\displaystyle P(\overline{z}) = \overline{P(z)}$

and

$\displaystyle P(q/z) = q^g z^{-2g} P(z)$

for all non-zero ${z}$. This means that the ${2g}$ zeroes ${\alpha_1,\dots,\alpha_{2g}}$ of ${P(z)}$ (counting multiplicity) lie in ${{\bf C} \backslash \{0\}}$ and are symmetric with respect to complex conjugation ${z \mapsto \overline{z}}$ and inversion ${z \mapsto q/z}$ across the circle ${\{ |z| = \sqrt{q}\}}$. We say that this polynomial obeys the Riemann hypothesis if all of its zeroes actually lie on the circle ${\{ |z| = \sqrt{q}\}}$. For instance, in the ${g=1}$ case, the polynomial ${z^2 - a_1 z + q}$ obeys the Riemann hypothesis if and only if ${|a_1| \leq 2\sqrt{q}}$.

Such polynomials arise in number theory as follows: if ${C}$ is a smooth projective curve of genus ${g}$ over a finite field ${{\bf F}_q}$, then, as famously proven by Weil, the associated local zeta function ${\zeta_{C,q}(z)}$ (as defined for instance in this previous blog post) is known to take the form

$\displaystyle \zeta_{C,q}(z) = \frac{P(z)}{(1-z)(1-qz)}$

where ${P}$ is a degree ${2g}$ polynomial obeying both the functional equation and the Riemann hypothesis. In the case that ${C}$ is an elliptic curve, then ${g=1}$ and ${P}$ takes the form ${P(z) = z^2 - a_1 z + q}$, where ${a_1}$ is ${q+1}$ minus the number of ${{\bf F}_q}$-points of ${C}$. The Riemann hypothesis in this case is a famous result of Hasse.

Another key example of such polynomials arises from rescaled characteristic polynomials

$\displaystyle P(z) := \det( z - \sqrt{q} F ) \ \ \ \ \ (1)$

of ${2g \times 2g}$ matrices ${F}$ in the compact symplectic group ${Sp(g)}$. These polynomials obey both the functional equation and the Riemann hypothesis. The Sato-Tate conjecture (in higher genus) asserts, roughly speaking, that “typical” polynomials ${P}$ arising from the number theoretic situation above are distributed like the rescaled characteristic polynomials (1), where ${F}$ is drawn uniformly from ${Sp(g)}$ with Haar measure.

Given a polynomial ${z \mapsto P(0,z)}$ of degree ${2g}$ with coefficients

$\displaystyle P(0,z) = \sum_{j=0}^{2g} a_j(0) z^j,$

we can evolve it in time by the formula

$\displaystyle P(t,z) = \sum_{j=0}^{2g} \exp( t(j-g)^2 ) a_j(0) z^j,$

thus ${a_j(t) = \exp(t(j-g)^2) a_j(0)}$ for ${t \in {\bf R}}$. Informally, as one increases ${t}$, this evolution accentuates the effect of the extreme monomials ${z^0}$ and ${z^{2g}}$ at the expense of the intermediate monomials such as ${z^g}$, and conversely as one decreases ${t}$. This family of polynomials obeys the heat-type equation

$\displaystyle \partial_t P(t,z) = (z \partial_z - g)^2 P(t,z). \ \ \ \ \ (2)$

In view of the results of Marcus, Spielman, and Srivastava, it is also very likely that one can interpret this flow in terms of expected characteristic polynomials involving conjugation over the compact symplectic group ${Sp(g)}$, and should also be tied to some sort of “${\beta=\infty}$” version of Brownian motion on this group, but we have not attempted to work this connection out in detail.
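One can watch this flow numerically. Here is a small sketch (a toy degree ${4}$ example with ${q=2}$ and coefficients chosen to obey the functional equation) confirming that the zeroes approach the circle ${\{|z|=\sqrt{q}\}}$ as ${t}$ increases:

```python
import numpy as np

q, g = 2.0, 2
# a_0, ..., a_4 obeying the functional equation a_j = q^{g-j} a_{2g-j}
# (a toy example: a_4 = 1, a_3 = 1, a_2 = 1/2 chosen arbitrarily)
a = np.array([q ** 2, q * 1.0, 0.5, 1.0, 1.0])

def zeroes_at_time(t):
    # evolve the coefficients by a_j(t) = exp(t (j-g)^2) a_j(0)
    at = np.array([np.exp(t * (j - g) ** 2) * a[j] for j in range(2 * g + 1)])
    return np.roots(at[::-1])  # np.roots expects the leading coefficient first

# for large positive t the zeroes are close to the circle |z| = sqrt(q)
assert np.allclose(np.abs(zeroes_at_time(4.0)), np.sqrt(q), atol=1e-3)
```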

It is clear that if ${z \mapsto P(0,z)}$ obeys the functional equation, then so does ${z \mapsto P(t,z)}$ for any other time ${t}$. Now we investigate the evolution of the zeroes. Suppose that at some time ${t_0}$ the zeroes ${\alpha_1(t_0),\dots,\alpha_{2g}(t_0)}$ of ${z \mapsto P(t_0,z)}$ are distinct; then

$\displaystyle P(t_0,z) = a_{2g}(0) \exp( t_0g^2 ) \prod_{j=1}^{2g} (z - \alpha_j(t_0) ).$

From the inverse function theorem we see that for times ${t}$ sufficiently close to ${t_0}$, the zeroes ${\alpha_1(t),\dots,\alpha_{2g}(t)}$ of ${z \mapsto P(t,z)}$ continue to be distinct (and vary smoothly in ${t}$), with

$\displaystyle P(t,z) = a_{2g}(0) \exp( t g^2 ) \prod_{j=1}^{2g} (z - \alpha_j(t) ).$

Differentiating this at any ${z}$ not equal to any of the ${\alpha_j(t)}$, we obtain

$\displaystyle \partial_t P(t,z) = P(t,z) ( g^2 - \sum_{j=1}^{2g} \frac{\alpha'_j(t)}{z - \alpha_j(t)})$

and

$\displaystyle \partial_z P(t,z) = P(t,z) ( \sum_{j=1}^{2g} \frac{1}{z - \alpha_j(t)})$

and

$\displaystyle \partial_{zz} P(t,z) = P(t,z) ( \sum_{1 \leq j,k \leq 2g: j \neq k} \frac{1}{(z - \alpha_j(t))(z - \alpha_k(t))}).$

Inserting these formulae into (2) (expanding ${(z \partial_z - g)^2}$ as ${z^2 \partial_{zz} - (2g-1) z \partial_z + g^2}$) and canceling some terms, we conclude that

$\displaystyle - \sum_{j=1}^{2g} \frac{\alpha'_j(t)}{z - \alpha_j(t)} = z^2 \sum_{1 \leq j,k \leq 2g: j \neq k} \frac{1}{(z - \alpha_j(t))(z - \alpha_k(t))}$

$\displaystyle - (2g-1) z \sum_{j=1}^{2g} \frac{1}{z - \alpha_j(t)}$

for ${t}$ sufficiently close to ${t_0}$, and ${z}$ not equal to ${\alpha_1(t),\dots,\alpha_{2g}(t)}$. Extracting the residue at ${z = \alpha_j(t)}$, we conclude that

$\displaystyle - \alpha'_j(t) = 2 \alpha_j(t)^2 \sum_{1 \leq k \leq 2g: k \neq j} \frac{1}{\alpha_j(t) - \alpha_k(t)} - (2g-1) \alpha_j(t)$

which we can rearrange as

$\displaystyle \frac{\alpha'_j(t)}{\alpha_j(t)} = - \sum_{1 \leq k \leq 2g: k \neq j} \frac{\alpha_j(t)+\alpha_k(t)}{\alpha_j(t)-\alpha_k(t)}.$

If we make the change of variables ${\alpha_j(t) = \sqrt{q} e^{i\theta_j(t)}}$ (noting that one can make ${\theta_j}$ depend smoothly on ${t}$ for ${t}$ sufficiently close to ${t_0}$), this becomes

$\displaystyle \partial_t \theta_j(t) = \sum_{1 \leq k \leq 2g: k \neq j} \cot \frac{\theta_j(t) - \theta_k(t)}{2}. \ \ \ \ \ (3)$

Intuitively, this equation asserts that the phases ${\theta_j}$ repel each other if they are real (and attract each other if their difference is imaginary). If ${z \mapsto P(t_0,z)}$ obeys the Riemann hypothesis, then the ${\theta_j}$ are all real at time ${t_0}$, and the Picard uniqueness theorem (applied to ${\theta_j(t)}$ and its complex conjugate) then shows that the ${\theta_j}$ remain real for ${t}$ sufficiently close to ${t_0}$. If we then define the entropy functional

$\displaystyle H(\theta_1,\dots,\theta_{2g}) := \sum_{1 \leq j < k \leq 2g} \log \frac{1}{|\sin \frac{\theta_j-\theta_k}{2}| }$

then the above equation becomes a gradient flow

$\displaystyle \partial_t \theta_j(t) = - 2 \frac{\partial H}{\partial \theta_j}( \theta_1(t),\dots,\theta_{2g}(t) )$

which implies in particular that ${H(\theta_1(t),\dots,\theta_{2g}(t))}$ is non-increasing in time. This shows that as one evolves time forward from ${t_0}$, there is a uniform lower bound on the separation between the phases ${\theta_1(t),\dots,\theta_{2g}(t)}$, and hence the equation can be solved indefinitely; in particular, ${z \mapsto P(t,z)}$ obeys the Riemann hypothesis for all ${t > t_0}$ if it does so at time ${t_0}$. Our argument here assumed that the zeroes of ${z \mapsto P(t_0,z)}$ were simple, but this assumption can be removed by the usual limiting argument.
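As a crude numerical sanity check of the gradient flow structure, one can take a single Euler step of (3) from some arbitrary well-separated real initial phases and confirm that ${H}$ decreases:

```python
import math

theta = [0.1, 0.9, 2.0, 3.1, 4.4, 5.5]  # 2g = 6 arbitrary well-separated phases

def H(th):
    # the entropy functional H(theta_1, ..., theta_{2g})
    return sum(-math.log(abs(math.sin((th[j] - th[k]) / 2)))
               for j in range(len(th)) for k in range(j))

def euler_step(th, dt):
    # theta_j' = sum_{k != j} cot((theta_j - theta_k)/2), equation (3)
    vel = [sum(1.0 / math.tan((th[j] - th[k]) / 2.0)
               for k in range(len(th)) if k != j) for j in range(len(th))]
    return [t + dt * v for t, v in zip(th, vel)]

# H is non-increasing along the flow (strictly decreasing away from the
# equally spaced equilibrium)
assert H(euler_step(theta, 1e-4)) < H(theta)
```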

For any polynomial ${z \mapsto P(0,z)}$ obeying the functional equation, the rescaled polynomials ${z \mapsto e^{-g^2 t} P(t,z)}$ converge locally uniformly to ${a_{2g}(0) (z^{2g} + q^g)}$ as ${t \rightarrow +\infty}$. By Rouché’s theorem, we conclude that the zeroes of ${z \mapsto P(t,z)}$ converge to the equally spaced points ${\{ \sqrt{q} e^{2\pi i(j+1/2)/(2g)}: j=1,\dots,2g\}}$ on the circle ${\{ |z| = \sqrt{q}\}}$. Together with the symmetry properties of the zeroes, this implies in particular that ${z \mapsto P(t,z)}$ obeys the Riemann hypothesis for all sufficiently large positive ${t}$. In the opposite direction, when ${t \rightarrow -\infty}$, the polynomials ${z \mapsto P(t,z)}$ converge locally uniformly to ${a_g(0) z^g}$, so if ${a_g(0) \neq 0}$, ${g}$ of the zeroes converge to the origin and the other ${g}$ converge to infinity. In particular, ${z \mapsto P(t,z)}$ fails the Riemann hypothesis for sufficiently large negative ${t}$. Thus (if ${a_g(0) \neq 0}$), there must exist a real number ${\Lambda}$, which we call the de Bruijn-Newman constant of the original polynomial ${z \mapsto P(0,z)}$, such that ${z \mapsto P(t,z)}$ obeys the Riemann hypothesis for ${t \geq \Lambda}$ and fails the Riemann hypothesis for ${t < \Lambda}$. The situation is a bit more complicated if ${a_g(0)}$ vanishes; if ${k}$ is the first natural number such that ${a_{g+k}(0)}$ (or equivalently, ${a_{g-k}(0)}$) does not vanish, then by the above arguments one finds in the limit ${t \rightarrow -\infty}$ that ${g-k}$ of the zeroes go to the origin, ${g-k}$ go to infinity, and the remaining ${2k}$ zeroes converge to the equally spaced points ${\{ \sqrt{q} e^{2\pi i(j+1/2)/(2k)}: j=1,\dots,2k\}}$. In this case the de Bruijn-Newman constant remains finite except in the degenerate case ${k=g}$, in which case ${\Lambda = -\infty}$.

For instance, consider the case when ${g=1}$ and ${P(0,z) = z^2 - a_1 z + q}$ for some real ${a_1}$ with ${|a_1| \leq 2\sqrt{q}}$. Then the quadratic polynomial

$\displaystyle P(t,z) = e^t z^2 - a_1 z + e^t q$

has zeroes

$\displaystyle \frac{a_1 \pm \sqrt{a_1^2 - 4 e^{2t} q}}{2e^t}$

and one easily checks that these zeroes lie on the circle ${\{ |z|=\sqrt{q}\}}$ when ${t \geq \log \frac{|a_1|}{2\sqrt{q}}}$, and are on the real axis otherwise. Thus in this case we have ${\Lambda = \log \frac{|a_1|}{2\sqrt{q}}}$ (with ${\Lambda=-\infty}$ if ${a_1=0}$). Note how as ${t}$ increases to ${+\infty}$, the zeroes repel each other and eventually converge to ${\pm i \sqrt{q}}$, while as ${t}$ decreases to ${-\infty}$, the zeroes collide and then separate on the real axis, with one zero going to the origin and the other to infinity.
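The ${g=1}$ example can be checked directly; the sketch below (with arbitrary parameters ${q=3}$, ${a_1=2}$) verifies that the zeroes of ${e^t z^2 - a_1 z + e^t q}$ lie on the circle precisely for ${t \geq \Lambda = \log \frac{|a_1|}{2\sqrt{q}}}$:

```python
import math

q, a1 = 3.0, 2.0  # arbitrary parameters for illustration
Lam = math.log(abs(a1) / (2 * math.sqrt(q)))  # the de Bruijn-Newman constant

def zeroes_on_circle(t):
    # zeroes of e^t z^2 - a1 z + e^t q lie on |z| = sqrt(q) iff the
    # discriminant is non-positive (complex conjugate roots with product q)
    return a1 * a1 - 4 * math.exp(2 * t) * q <= 0

assert zeroes_on_circle(Lam) and zeroes_on_circle(Lam + 1.0)
assert not zeroes_on_circle(Lam - 0.1)
```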

The arguments in my paper with Brad Rodgers (discussed in this previous post) indicate that for a “typical” polynomial ${P}$ of degree ${2g}$ that obeys the Riemann hypothesis, the expected time to relaxation to equilibrium (in which the zeroes are equally spaced) should be comparable to ${1/g}$, basically because the average spacing is ${1/g}$ and hence by (3) the typical velocity of the zeroes should be comparable to ${g}$, and the diameter of the unit circle is comparable to ${1}$, thus requiring time comparable to ${1/g}$ to reach equilibrium. Taking contrapositives, this suggests that the de Bruijn-Newman constant ${\Lambda}$ should typically take on values comparable to ${-1/g}$ (since typically one would not expect the initial configuration of zeroes to be close to evenly spaced). I have not attempted to formalise or prove this claim, but presumably one could do some numerics (perhaps using some of the examples of ${P}$ given previously) to explore this further.

The Polymath14 online collaboration has uploaded to the arXiv its paper “Homogeneous length functions on groups“, submitted to Algebra & Number Theory. The paper completely classifies homogeneous length functions ${\| \|: G \rightarrow {\bf R}^+}$ on an arbitrary group ${G = (G,\cdot,e,()^{-1})}$, that is to say non-negative functions that obey the symmetry condition ${\|x^{-1}\| = \|x\|}$, the non-degeneracy condition ${\|x\|=0 \iff x=e}$, the triangle inequality ${\|xy\| \leq \|x\| + \|y\|}$, and the homogeneity condition ${\|x^2\| = 2\|x\|}$. It turns out that these norms can only arise from pulling back the norm of a Banach space by an isometric embedding of the group. Among other things, this shows that ${G}$ can support a homogeneous length function if and only if it is abelian and torsion free, thus giving a metric description of this property.

The proof is based on repeated use of the homogeneous length function axioms, combined with elementary identities of commutators, to obtain increasingly good bounds on quantities such as ${\|[x,y]\|}$, until one can show that such norms have to vanish. See the previous post for a full proof. The result is robust in that it allows for some loss in the triangle inequality and homogeneity condition, leading to some new results on “quasinorms” on groups that relate to quasihomomorphisms.

As there are now a large number of comments on the previous post on this project, this post will also serve as the new thread for any final discussion of this project as it winds down.

In the tradition of “Polymath projects“, the problem posed in the previous two blog posts has now been solved, thanks to the cumulative effect of many small contributions by many participants (including, but not limited to, Sean Eberhard, Tobias Fritz, Siddharta Gadgil, Tobias Hartnick, Chris Jerdonek, Apoorva Khare, Antonio Machiavelo, Pace Nielsen, Andy Putman, Will Sawin, Alexander Shamov, Lior Silberman, and David Speyer). In this post I’ll write down a streamlined resolution, eliding a number of important but ultimately removable partial steps and insights made by the above contributors en route to the solution.

Theorem 1 Let ${G = (G,\cdot)}$ be a group. Suppose one has a “seminorm” function ${\| \|: G \rightarrow [0,+\infty)}$ which obeys the triangle inequality

$\displaystyle \|xy \| \leq \|x\| + \|y\|$

for all ${x,y \in G}$, with equality whenever ${x=y}$. Then the seminorm factors through the abelianisation map ${G \rightarrow G/[G,G]}$.

Proof: By the triangle inequality, it suffices to show that ${\| [x,y]\| = 0}$ for all ${x,y \in G}$, where ${[x,y] := xyx^{-1}y^{-1}}$ is the commutator.

We first establish some basic facts. Firstly, by hypothesis we have ${\|x^2\| = 2 \|x\|}$, and hence ${\|x^n \| = n \|x\|}$ whenever ${n}$ is a power of two. On the other hand, by the triangle inequality we have ${\|x^n \| \leq n\|x\|}$ for all positive ${n}$, and hence by the triangle inequality again we also have the matching lower bound, thus

$\displaystyle \|x^n \| = n \|x\|$

for all ${n > 0}$. The claim is also true for ${n=0}$ (apply the preceding bound with ${x=1}$ and ${n=2}$). By replacing ${\|x\|}$ with ${\max(\|x\|, \|x^{-1}\|)}$ if necessary we may now also assume without loss of generality that ${\|x^{-1} \| = \|x\|}$, thus

$\displaystyle \|x^n \| = |n| \|x\| \ \ \ \ \ (1)$

for all integers ${n}$.

Next, for any ${x,y \in G}$, and any natural number ${n}$, we have

$\displaystyle \|yxy^{-1} \| = \frac{1}{n} \| (yxy^{-1})^n \|$

$\displaystyle = \frac{1}{n} \| y x^n y^{-1} \|$

$\displaystyle \leq \frac{1}{n} ( \|y\| + n \|x\| + \|y^{-1}\| )$

so on taking limits as ${n \rightarrow \infty}$ we have ${\|yxy^{-1} \| \leq \|x\|}$. Replacing ${x,y}$ by ${yxy^{-1},y^{-1}}$ gives the matching lower bound, thus we have the conjugation invariance

$\displaystyle \|yxy^{-1} \| = \|x\|. \ \ \ \ \ (2)$

Next, we observe that if ${x,y,z,w}$ are such that ${x}$ is conjugate to both ${wy}$ and ${zw^{-1}}$, then one has the inequality

$\displaystyle \|x\| \leq \frac{1}{2} ( \|y \| + \| z \| ). \ \ \ \ \ (3)$

Indeed, if we write ${x = swys^{-1} = t zw^{-1} t^{-1}}$ for some ${s,t \in G}$, then for any natural number ${n}$ one has

$\displaystyle \|x\| = \frac{1}{2n} \| x^n x^n \|$

$\displaystyle = \frac{1}{2n} \| swy \dots wy s^{-1}t zw^{-1} \dots zw^{-1} t^{-1} \|$

where the ${wy}$ and ${zw^{-1}}$ terms each appear ${n}$ times. From (2) we see that conjugation by ${w}$ does not affect the norm. Using this and the triangle inequality several times, we conclude that

$\displaystyle \|x\| \leq \frac{1}{2n} ( \|s\| + n \|y\| + \| s^{-1} t\| + n \|z\| + \|t^{-1} \| ),$

and the claim (3) follows by sending ${n \rightarrow \infty}$.

The following special case of (3) will be of particular interest. Let ${x,y \in G}$, and for any integers ${m,k}$, define the quantity

$\displaystyle f(m,k) := \| x^m [x,y]^k \|.$

Observe that ${x^m [x,y]^k}$ is conjugate to both ${x (x^{m-1} [x,y]^k)}$ and to ${(y^{-1} x^m [x,y]^{k-1} xy) x^{-1}}$, hence by (3) one has

$\displaystyle \| x^m [x,y]^k \| \leq \frac{1}{2} ( \| x^{m-1} [x,y]^k \| + \| y^{-1} x^{m} [x,y]^{k-1} xy \|)$

which by (2) leads to the recursive inequality

$\displaystyle f(m,k) \leq \frac{1}{2} (f(m-1,k) + f(m+1,k-1)).$

We can write this in probabilistic notation as

$\displaystyle f(m,k) \leq {\bf E} f( (m,k) + X )$

where ${X}$ is a random vector that takes the values ${(-1,0)}$ and ${(1,-1)}$ with probability ${1/2}$ each. Iterating this, we conclude in particular that for any large natural number ${n}$, one has

$\displaystyle f(0,n) \leq {\bf E} f( Z )$

where ${Z := (0,n) + X_1 + \dots + X_{2n}}$ and ${X_1,\dots,X_{2n}}$ are iid copies of ${X}$. We can write ${Z = (1,-1/2) (Y_1 + \dots + Y_{2n})}$ where $Y_1,\dots,Y_{2n} = \pm 1$ are iid signs.  By the triangle inequality, we thus have

$\displaystyle f( Z ) \leq |Y_1+\dots+Y_{2n}| (\|x\| + \frac{1}{2} \| [x,y] \|),$

noting that $Y_1+\dots+Y_{2n}$ is an even integer.  On the other hand, $Y_1+\dots+Y_{2n}$ has mean zero and variance $2n$, hence by Cauchy-Schwarz

$\displaystyle f(0,n) \leq \sqrt{2n}( \|x\| + \frac{1}{2} \| [x,y] \|).$

But by (1), the left-hand side is equal to ${n \| [x,y]\|}$. Dividing by ${n}$ and then sending ${n \rightarrow \infty}$, we obtain the claim. $\Box$
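The Cauchy-Schwarz step uses the standard bound ${{\bf E} |Y_1+\dots+Y_{2n}| \leq \sqrt{2n}}$ for iid signs, which can be confirmed by exact enumeration of the binomial distribution:

```python
from math import comb, sqrt

# E|Y_1 + ... + Y_m| <= sqrt(m) for m iid uniform signs: if k of the m signs
# equal +1 (probability C(m,k)/2^m) then the sum equals 2k - m.
for n in range(1, 20):
    m = 2 * n
    e_abs = sum(comb(m, k) * abs(2 * k - m) for k in range(m + 1)) / 2 ** m
    assert e_abs <= sqrt(m)
```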

The above theorem reduces such seminorms to abelian groups. It is easy to see from (1) that any torsion element of such groups has zero seminorm, so we can in fact restrict to torsion-free groups, which we now write using additive notation ${G = (G,+)}$, thus for instance ${\| nx \| = |n| \|x\|}$ for ${n \in {\bf Z}}$. We think of ${G}$ as a ${{\bf Z}}$-module. One can then extend the seminorm to the associated ${{\bf Q}}$-vector space ${G \otimes_{\bf Z} {\bf Q}}$ by the formula ${\|\frac{a}{b} x\| := \frac{|a|}{b} \|x\|}$, and then to the associated ${{\bf R}}$-vector space ${G \otimes_{\bf Z} {\bf R}}$ by continuity, at which point it becomes a genuine seminorm (provided we have ensured the symmetry condition ${\|-x\| = \|x\|}$). Conversely, any seminorm on ${G \otimes_{\bf Z} {\bf R}}$ induces a seminorm on ${G}$. (These arguments also appear in this paper of Khare and Rajaratnam.)

This post is a continuation of the previous post, which has attracted a large number of comments. I’m recording here some calculations that arose from those comments (particularly those of Pace Nielsen, Lior Silberman, Tobias Fritz, and Apoorva Khare). Please feel free to either continue these calculations or to discuss other approaches to the problem, such as those mentioned in the remaining comments to the previous post.

Let ${F_2}$ be the free group on two generators ${a,b}$, and let ${\| \|: F_2 \rightarrow {\bf R}^+}$ be a quantity obeying the triangle inequality

$\displaystyle \| xy\| \leq \|x \| + \|y\|$

and the linear growth property

$\displaystyle \| x^n \| = |n| \| x\|$

for all ${x,y \in F_2}$ and integers ${n \in {\bf Z}}$; this implies the conjugation invariance

$\displaystyle \| y^{-1} x y \| = \|x\|$

or equivalently

$\displaystyle \| xy \| = \| yx\|$

We consider inequalities of the form

$\displaystyle \| xyx^{-1}y^{-1} \| \leq \alpha \|x\| + \beta \| y\| \ \ \ \ \ (1)$

or

$\displaystyle \| xyx^{-2}y^{-1} \| \leq \gamma \|x\| + \delta \| y\| \ \ \ \ \ (2)$

for various real numbers ${\alpha,\beta,\gamma,\delta}$. For instance, since

$\displaystyle \| xyx^{-1}y^{-1} \| \leq \| xyx^{-1}\| + \|y^{-1} \| = \|y\| + \|y\|$

we have (1) for ${(\alpha,\beta) = (0,2)}$. We also have the following further relations:

Proposition 1

• (i) If (1) holds for ${(\alpha,\beta)}$, then it holds for ${(\beta,\alpha)}$.
• (ii) If (1) holds for ${(\alpha,\beta)}$, then (2) holds for ${(\alpha+1, \frac{\beta}{2})}$.
• (iii) If (2) holds for ${(\gamma,\delta)}$, then (1) holds for ${(\frac{2\gamma}{3}, \frac{2\delta}{3})}$.
• (iv) If (1) holds for ${(\alpha,\beta)}$ and (2) holds for ${(\gamma,\delta)}$, then (1) holds for ${(\frac{2\alpha+1+\gamma}{4}, \frac{\delta+\beta}{4})}$.

Proof: For (i) we simply observe that

$\displaystyle \| xyx^{-1} y^{-1} \| = \| (xyx^{-1} y^{-1})^{-1} \| = \| y^{-1} x^{-1} y x \| = \| y x y^{-1} x^{-1} \|.$

For (ii), we calculate

$\displaystyle \| xyx^{-2}y^{-1} \| = \frac{1}{2}\| (xyx^{-2}y^{-1})^2 \|$

$\displaystyle = \frac{1}{2} \| (xyx^{-2}y^{-1} x) (yx^{-2} y^{-1}) \|$

$\displaystyle \leq \frac{1}{2} (\| xyx^{-2}y^{-1} x\| + \|yx^{-2} y^{-1}\|)$

$\displaystyle \leq \frac{1}{2} ( \| x^2 y x^{-2} y^{-1} \| + 2 \|x\| )$

$\displaystyle \leq \frac{1}{2} ( 2 \alpha \|x\| + \beta \|y\| + 2 \|x\|)$

giving the claim.

For (iii), we calculate

$\displaystyle \| xyx^{-1}y^{-1}\| = \frac{1}{3} \| (xyx^{-1}y^{-1})^3 \|$

$\displaystyle = \frac{1}{3} \| (xyx) (x^{-2} y^{-1} xy) (xyx)^{-1} (x^2 y x^{-1} y^{-1}) \|$

$\displaystyle \leq \frac{1}{3} ( \| x^{-2} y^{-1} xy\| + \| x^2 y x^{-1} y^{-1}\| )$

$\displaystyle = \frac{1}{3} ( \| xy x^{-2} y^{-1} \| + \|x^{-1} y^{-1} x^2 y \| )$

$\displaystyle \leq \frac{1}{3} ( \gamma \|x\| + \delta \|y\| + \gamma \|x\| + \delta \|y\|)$

giving the claim.

For (iv), we calculate

$\displaystyle \| xyx^{-1}y^{-1}\| = \frac{1}{4} \| (xyx^{-1}y^{-1})^4 \|$

$\displaystyle = \frac{1}{4} \| (xy) (x^{-1} y^{-1} x) (y x^{-1} y^{-1}) (xyx^{-1}) (xy)^{-1} (x^2yx^{-1}y^{-1}) \|$

$\displaystyle \leq \frac{1}{4} ( \| (x^{-1} y^{-1} x) (y x^{-1} y^{-1}) (xyx^{-1}) \| + \|x^2yx^{-1}y^{-1}\| )$

$\displaystyle \leq \frac{1}{4} ( \|(y x^{-1} y^{-1}) (xyx^{-1})(x^{-1} y^{-1} x) \| + \gamma \|x\| + \delta \|y\|)$

$\displaystyle \leq \frac{1}{4} ( \|x\| + \|(xyx^{-1})(x^{-1} y^{-1} x) \| + \gamma \|x\| + \delta \|y\|)$

$\displaystyle = \frac{1}{4} ( \|x\| + \|x^{2} y x^{-2} y^{-1} \|+ \gamma \|x\| + \delta \|y\|)$

$\displaystyle \leq \frac{1}{4} ( \|x\| + 2\alpha \|x\| + \beta \|y\| + \gamma \|x\| + \delta \|y\|)$

giving the claim. $\Box$

Here is a typical application of the above estimates. If (1) holds for ${(\alpha,\beta)}$, then by part (i) it holds for ${(\beta,\alpha)}$, then by (ii) (2) holds for ${(\beta+1,\frac{\alpha}{2})}$, then by (iv) (1) holds for ${(\frac{3\beta+2}{4}, \frac{3\alpha}{8})}$. The map ${(\alpha,\beta) \mapsto (\frac{3\beta+2}{4}, \frac{3\alpha}{8})}$ has fixed point ${(\alpha,\beta) = (\frac{16}{23}, \frac{6}{23})}$, thus

$\displaystyle \| xyx^{-1}y^{-1} \| \leq \frac{16}{23} \|x\| + \frac{6}{23} \|y\|.$

For instance, if ${\|a\|, \|b\| \leq 1}$, then ${\|aba^{-1}b^{-1} \| \leq 22/23 = 0.95652\dots}$.
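As a quick sanity check (not part of the original argument), one can carry out this iteration numerically; the triangle inequality gives the trivial starting bound ${\|xyx^{-1}y^{-1}\| \leq 2\|x\| + 2\|y\|}$, i.e. (1) holds for ${(2,2)}$, and the map contracts towards the fixed point:

```python
from fractions import Fraction

def step(alpha, beta):
    # one round of the argument in the text: if (1) holds for (alpha, beta),
    # then (1) holds for ((3*beta + 2)/4, 3*alpha/8)
    return (3 * beta + 2) / 4, 3 * alpha / 8

# the claimed fixed point is genuinely fixed
assert step(Fraction(16, 23), Fraction(6, 23)) == (Fraction(16, 23), Fraction(6, 23))

# iterating from the trivial bound (2, 2) converges to (16/23, 6/23)
a, b = 2.0, 2.0
for _ in range(100):
    a, b = step(a, b)
print(a, b, a + b)  # approaches 16/23, 6/23, 22/23
```

The map is affine with contraction factor ${9/32}$ over every two steps, so the convergence is geometric regardless of the starting pair.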

Here is a curious question posed to me by Apoorva Khare that I do not know the answer to. Let ${F_2}$ be the free group on two generators ${a,b}$. Does there exist a metric ${d}$ on this group which is

• bi-invariant, thus ${d(xg,yg)=d(gx,gy) = d(x,y)}$ for all ${x,y,g \in F_2}$; and
• linear growth in the sense that ${d(x^n,1) = n d(x,1)}$ for all ${x \in F_2}$ and all natural numbers ${n}$?

By defining the “norm” of an element ${x \in F_2}$ to be ${\| x\| := d(x,1)}$, an equivalent formulation of the problem asks if there exists a non-negative norm function ${\| \cdot \|: F_2 \rightarrow {\bf R}^+}$ that obeys the conjugation invariance

$\displaystyle \| gxg^{-1} \| = \|x \| \ \ \ \ \ (1)$

for all ${x,g \in F_2}$, the triangle inequality

$\displaystyle \| xy \| \leq \| x\| + \| y\| \ \ \ \ \ (2)$

for all ${x,y \in F_2}$, and the linear growth

$\displaystyle \| x^n \| = |n| \|x\| \ \ \ \ \ (3)$

for all ${x \in F_2}$ and ${n \in {\bf Z}}$, and such that ${\|x\| > 0}$ for all non-identity ${x \in F_2}$. Indeed, if such a norm exists then one can just take ${d(x,y) := \| x y^{-1} \|}$ to give the desired metric.

One can normalise the norm of the generators to be at most ${1}$, thus

$\displaystyle \| a \|, \| b \| \leq 1.$

This can then be used to upper bound the norm of other words in ${F_2}$. For instance, from (1), (3) one has

$\displaystyle \| aba^{-1} \|, \| b^{-1} a b \|, \| a^{-1} b^{-1} a \|, \| bab^{-1}\| \leq 1.$

A bit less trivially, from (3), (2), (1) one can bound commutators as

$\displaystyle \| aba^{-1} b^{-1} \| = \frac{1}{3} \| (aba^{-1} b^{-1})^3 \|$

$\displaystyle = \frac{1}{3} \| (aba^{-1}) (b^{-1} ab) (a^{-1} b^{-1} a) (b a^{-1} b^{-1}) \|$

$\displaystyle \leq \frac{4}{3}.$

In a similar spirit one has

$\displaystyle \| aba^{-2} b^{-1} \| = \frac{1}{2} \| (aba^{-2} b^{-1})^2 \|$

$\displaystyle = \frac{1}{2} \| (aba^{-1}) (a^{-1} b^{-1} a) (ba^{-1} b^{-1}) (ba^{-1} b^{-1}) \|$

$\displaystyle \leq 2.$
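Identities of this sort are easy to verify mechanically by free reduction; here is a quick sketch (the encoding, with capital letters standing for inverse generators, is of course just one convenient choice):

```python
def reduce_word(w):
    # freely reduce a word over {a, b, A, B}, where A = a^-1 and B = b^-1,
    # by repeatedly cancelling adjacent inverse pairs
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

c = "abAB"  # the commutator a b a^-1 b^-1

# (aba^-1)(b^-1 a b)(a^-1 b^-1 a)(b a^-1 b^-1) = (aba^-1 b^-1)^3
assert reduce_word("abA" + "Bab" + "ABa" + "bAB") == reduce_word(c * 3)

# (aba^-1)(a^-1 b^-1 a)(b a^-1 b^-1)(b a^-1 b^-1) = (aba^-2 b^-1)^2
assert reduce_word("abA" + "ABa" + "bAB" + "bAB") == reduce_word("abAAB" * 2)
```

Each factor on the left-hand side is a conjugate of a generator or its inverse, hence has norm at most ${1}$, which is what drives the bounds above.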

What is not clear to me is if one can keep arguing like this to continually improve the upper bounds on the norm ${\| g\|}$ of a given non-trivial group element ${g}$ to the point where this norm must in fact vanish, which would demonstrate that no metric with the above properties on ${F_2}$ would exist (and in fact would impose strong constraints on similar metrics existing on other groups as well). It is also tempting to use some ideas from geometric group theory (e.g. asymptotic cones) to try to understand these metrics further, though I wasn’t able to get very far with this approach. Anyway, this feels like a problem that might be somewhat receptive to a more crowdsourced attack, so I am posing it here in case any readers wish to try to make progress on it.

How many groups of order four are there? Technically, there are an enormous number, so much so, in fact, that the class of groups of order four is not even a set, but merely a proper class. This is because any four objects ${a,b,c,d}$ can be turned into a group ${\{a,b,c,d\}}$ by designating one of the four objects, say ${a}$, to be the group identity, and imposing a suitable multiplication table (and inversion law) on the four elements in a manner that obeys the usual group axioms. Since all sets are themselves objects, the class of four-element groups is thus at least as large as the class of all sets, which by Russell’s paradox is known not to itself be a set (assuming the usual ZFC axioms of set theory).

A much better question is to ask how many groups of order four there are up to isomorphism, counting each isomorphism class of groups exactly once. Now, as one learns in undergraduate group theory classes, the answer is just “two”: the cyclic group ${C_4}$ and the Klein four-group ${C_2 \times C_2}$.

More generally, given a class of objects ${X}$ and some equivalence relation ${\sim}$ on ${X}$ (which one should interpret as describing the property of two objects in ${X}$ being “isomorphic”), one can consider the number ${|X / \sim|}$ of objects in ${X}$ “up to isomorphism”, which is simply the cardinality of the collection ${X/\sim}$ of equivalence classes ${[x]:=\{y\in X: x \sim y \}}$ of ${X}$. In the case where ${X}$ is finite, one can express this cardinality by the formula

$\displaystyle |X/\sim| = \sum_{x \in X} \frac{1}{|[x]|}, \ \ \ \ \ (1)$

thus one counts elements in ${X}$, weighted by the reciprocal of the number of objects they are isomorphic to.

Example 1 Let ${X}$ be the five-element set ${\{-2,-1,0,1,2\}}$ of integers between ${-2}$ and ${2}$. Let us say that two elements ${x, y}$ of ${X}$ are isomorphic if they have the same magnitude: ${x \sim y \iff |x| = |y|}$. Then the quotient space ${X/\sim}$ consists of just three equivalence classes: ${\{-2,2\} = [2] = [-2]}$, ${\{-1,1\} = [-1] = [1]}$, and ${\{0\} = [0]}$. Thus there are three objects in ${X}$ up to isomorphism, and the identity (1) is then just

$\displaystyle 3 = \frac{1}{2} + \frac{1}{2} + 1 + \frac{1}{2} + \frac{1}{2}.$

Thus, to count elements in ${X}$ up to equivalence, the elements ${-2,-1,1,2}$ are given a weight of ${1/2}$ because they are each isomorphic to two elements in ${X}$, while the element ${0}$ is given a weight of ${1}$ because it is isomorphic to just one element in ${X}$ (namely, itself).
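The identity (1) in this example can be checked directly; a small sketch, using exact rational arithmetic to avoid rounding:

```python
from fractions import Fraction

X = [-2, -1, 0, 1, 2]

def isomorphic(x, y):
    # the equivalence relation of Example 1: same magnitude
    return abs(x) == abs(y)

# formula (1): |X/~| = sum over x in X of 1 / |[x]|
count = sum(Fraction(1, sum(1 for y in X if isomorphic(x, y))) for x in X)
print(count)  # 3
```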

Given a finite non-empty set ${X}$, there is also a natural probability distribution on ${X}$, namely the uniform distribution, according to which a random variable ${\mathbf{x} \in X}$ is set equal to any given element ${x}$ of ${X}$ with probability ${\frac{1}{|X|}}$:

$\displaystyle {\bf P}( \mathbf{x} = x ) = \frac{1}{|X|}.$

Given a notion ${\sim}$ of isomorphism on ${X}$, one can then define the random equivalence class ${[\mathbf{x}] \in X/\sim}$ that the random element ${\mathbf{x}}$ belongs to. But if the isomorphism classes are unequal in size, we now encounter a biasing effect: even if ${\mathbf{x}}$ was drawn uniformly from ${X}$, the equivalence class ${[\mathbf{x}]}$ need not be uniformly distributed in ${X/\sim}$. For instance, in the above example, if ${\mathbf{x}}$ was drawn uniformly from ${\{-2,-1,0,1,2\}}$, then the equivalence class ${[\mathbf{x}]}$ will not be uniformly distributed in the three-element space ${X/\sim}$, because it has a ${2/5}$ probability of being equal to the class ${\{-2,2\}}$ or to the class ${\{-1,1\}}$, and only a ${1/5}$ probability of being equal to the class ${\{0\}}$.

However, it is possible to remove this bias by changing the weighting in (1), and thus changing the notion of what cardinality means. To do this, we generalise the previous situation. Instead of considering sets ${X}$ with an equivalence relation ${\sim}$ to capture the notion of isomorphism, we instead consider groupoids, which are sets ${X}$ in which there are allowed to be multiple isomorphisms between elements in ${X}$ (and in particular, there are allowed to be multiple automorphisms from an element to itself). More precisely:

Definition 2 A groupoid is a set (or proper class) ${X}$, together with a (possibly empty) collection ${\mathrm{Iso}(x \rightarrow y)}$ of “isomorphisms” between any pair ${x,y}$ of elements of ${X}$, and a composition map ${f, g \mapsto g \circ f}$ from isomorphisms ${f \in \mathrm{Iso}(x \rightarrow y)}$, ${g \in \mathrm{Iso}(y \rightarrow z)}$ to isomorphisms in ${\mathrm{Iso}(x \rightarrow z)}$ for every ${x,y,z \in X}$, obeying the following group-like axioms:

• (Identity) For every ${x \in X}$, there is an identity isomorphism ${\mathrm{id}_x \in \mathrm{Iso}(x \rightarrow x)}$, such that ${f \circ \mathrm{id}_x = \mathrm{id}_y \circ f = f}$ for all ${f \in \mathrm{Iso}(x \rightarrow y)}$ and ${x,y \in X}$.
• (Associativity) If ${f \in \mathrm{Iso}(x \rightarrow y)}$, ${g \in \mathrm{Iso}(y \rightarrow z)}$, and ${h \in \mathrm{Iso}(z \rightarrow w)}$ for some ${x,y,z,w \in X}$, then ${h \circ (g \circ f) = (h \circ g) \circ f}$.
• (Inverse) If ${f \in \mathrm{Iso}(x \rightarrow y)}$ for some ${x,y \in X}$, then there exists an inverse isomorphism ${f^{-1} \in \mathrm{Iso}(y \rightarrow x)}$ such that ${f \circ f^{-1} = \mathrm{id}_y}$ and ${f^{-1} \circ f = \mathrm{id}_x}$.

We say that two elements ${x,y}$ of a groupoid are isomorphic, and write ${x \sim y}$, if there is at least one isomorphism from ${x}$ to ${y}$.

Example 3 Any category gives a groupoid by taking ${X}$ to be the set (or class) of objects, and ${\mathrm{Iso}(x \rightarrow y)}$ to be the collection of invertible morphisms from ${x}$ to ${y}$. For instance, in the category ${\mathbf{Set}}$ of sets, ${\mathrm{Iso}(x \rightarrow y)}$ would be the collection of bijections from ${x}$ to ${y}$; in the category ${\mathbf{Vec}/k}$ of linear vector spaces over some given base field ${k}$, ${\mathrm{Iso}(x \rightarrow y)}$ would be the collection of invertible linear transformations from ${x}$ to ${y}$; and so forth.

Every set ${X}$ equipped with an equivalence relation ${\sim}$ can be turned into a groupoid by assigning precisely one isomorphism ${\iota_{x \rightarrow y}}$ from ${x}$ to ${y}$ for any pair ${x,y \in X}$ with ${x \sim y}$, and no isomorphisms from ${x}$ to ${y}$ when ${x \not \sim y}$, with the groupoid operations of identity, composition, and inverse defined in the unique manner consistent with the axioms. We will call this the simply connected groupoid associated with this equivalence relation. For instance, with ${X = \{-2,-1,0,1,2\}}$ as above, if we turn ${X}$ into a simply connected groupoid, there will be precisely one isomorphism from ${2}$ to ${-2}$, and also precisely one isomorphism from ${2}$ to ${2}$, but no isomorphisms from ${2}$ to ${-1}$, ${0}$, or ${1}$.

However, one can also form multiply-connected groupoids in which there can be multiple isomorphisms from one element of ${X}$ to another. For instance, one can view ${X = \{-2,-1,0,1,2\}}$ as a space that is acted on by multiplication by the two-element group ${\{-1,+1\}}$. This gives rise to two types of isomorphisms, an identity isomorphism ${(+1)_x}$ from ${x}$ to ${x}$ for each ${x \in X}$, and a negation isomorphism ${(-1)_x}$ from ${x}$ to ${-x}$ for each ${x \in X}$; in particular, there are two automorphisms of ${0}$ (i.e., isomorphisms from ${0}$ to itself), namely ${(+1)_0}$ and ${(-1)_0}$, whereas the other four elements of ${X}$ only have a single automorphism (the identity isomorphism). One defines composition, identity, and inverse in this groupoid in the obvious fashion (using the group law of the two-element group ${\{-1,+1\}}$); for instance, we have ${(-1)_{-2} \circ (-1)_2 = (+1)_2}$.
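This action groupoid can be modelled quite concretely; in the following sketch (the pair encoding is just one hypothetical choice), the morphism ${(g)_x: x \rightarrow gx}$ is represented as a pair ${(g,x)}$, and composition follows the group law of ${\{-1,+1\}}$:

```python
X = [-2, -1, 0, 1, 2]
G = [1, -1]  # the two-element group, acting by multiplication

def compose(second, first):
    # second o first, defined only when first's target equals second's source
    g2, x2 = second
    g1, x1 = first
    assert x2 == g1 * x1, "morphisms not composable"
    return (g2 * g1, x1)

# the example from the text: (-1)_{-2} o (-1)_2 = (+1)_2
assert compose((-1, -2), (-1, 2)) == (1, 2)

# every morphism (g, x) : x -> g*x has inverse (g, g*x), since g*g = 1 here
for g in G:
    for x in X:
        assert compose((g, g * x), (g, x)) == (1, x)
```

In particular the two morphisms out of ${0}$, namely ${(+1,0)}$ and ${(-1,0)}$, both have source and target ${0}$, recovering the two automorphisms of ${0}$ mentioned above.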

For a finite multiply-connected groupoid, it turns out that the natural notion of “cardinality” (or as I prefer to call it, “cardinality up to isomorphism”) is given by the variant

$\displaystyle \sum_{x \in X} \frac{1}{|\{ f: f \in \mathrm{Iso}(x \rightarrow y) \hbox{ for some } y\}|}$

of (1). That is to say, in the multiply connected case, the denominator is no longer the number of objects ${y}$ isomorphic to ${x}$, but rather the number of isomorphisms from ${x}$ to other objects ${y}$. Grouping together all summands coming from a single equivalence class ${[x]}$ in ${X/\sim}$, we can also write this expression as

$\displaystyle \sum_{[x] \in X/\sim} \frac{1}{|\mathrm{Aut}(x)|} \ \ \ \ \ (2)$

where ${\mathrm{Aut}(x) := \mathrm{Iso}(x \rightarrow x)}$ is the automorphism group of ${x}$, that is to say the group of isomorphisms from ${x}$ to itself. (Note that if ${x,x'}$ belong to the same equivalence class ${[x]}$, then the two groups ${\mathrm{Aut}(x)}$ and ${\mathrm{Aut}(x')}$ will be isomorphic and thus have the same cardinality, and so the above expression is well-defined.)

For instance, if we take ${X}$ to be the simply connected groupoid on ${\{-2,-1,0,1,2\}}$, then the number of elements of ${X}$ up to isomorphism is

$\displaystyle \frac{1}{2} + \frac{1}{2} + 1 + \frac{1}{2} + \frac{1}{2} = 1 + 1 + 1 = 3$

exactly as before. If however we take the multiply connected groupoid on ${\{-2,-1,0,1,2\}}$, in which ${0}$ has two automorphisms, the number of elements of ${X}$ up to isomorphism is now the smaller quantity

$\displaystyle \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} = 1 + \frac{1}{2} + 1 = \frac{5}{2};$

the equivalence class ${[0]}$ is now counted with weight ${1/2}$ rather than ${1}$ due to the two automorphisms on ${0}$. Geometrically, one can think of this groupoid as being formed by taking the five-element set ${\{-2,-1,0,1,2\}}$, and “folding it in half” around the fixed point ${0}$, giving rise to two “full” quotient points ${[1], [2]}$ and one “half” point ${[0]}$. More generally, given a finite group ${G}$ acting on a finite set ${X}$, and forming the associated multiply connected groupoid, the cardinality up to isomorphism of this groupoid will be ${|X|/|G|}$, since each element ${x}$ of ${X}$ will have ${|G|}$ isomorphisms on it (whether they be to the same element ${x}$, or to other elements of ${X}$).
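The ${|X|/|G|}$ count is easy to confirm in the running example; a quick sketch:

```python
from fractions import Fraction

X = [-2, -1, 0, 1, 2]
G = [1, -1]  # acting by multiplication

# the isomorphisms out of x are x -> g*x for g in G, so there are exactly |G|
# of them, regardless of whether g*x happens to equal x
card = sum(Fraction(1, len([g * x for g in G])) for x in X)
print(card)  # 5/2
assert card == Fraction(len(X), len(G))
```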

The definition (2) can also make sense for some infinite groupoids; to my knowledge this was first explicitly done in this paper of Baez and Dolan. Consider for instance the category ${\mathbf{FinSet}}$ of finite sets, with isomorphisms given by bijections as in Example 3. Every finite set is isomorphic to ${\{1,\dots,n\}}$ for some natural number ${n}$, so the equivalence classes of ${\mathbf{FinSet}}$ may be indexed by the natural numbers. The automorphism group ${S_n}$ of ${\{1,\dots,n\}}$ has order ${n!}$, so the cardinality of ${\mathbf{FinSet}}$ up to isomorphism is

$\displaystyle \sum_{n=0}^\infty \frac{1}{n!} = e.$

(This fact is sometimes loosely stated as “the number of finite sets is ${e}$“, but I view this statement as somewhat misleading if the qualifier “up to isomorphism” is not added.) Similarly, when one allows for multiple isomorphisms from a group to itself, the number of groups of order four up to isomorphism is now

$\displaystyle \frac{1}{2} + \frac{1}{6} = \frac{2}{3}$

because the cyclic group ${C_4}$ has two automorphisms, whereas the Klein four-group ${C_2 \times C_2}$ has six.
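For groups this small, the automorphism counts can be verified by brute force over all bijections; here is a sketch (the multiplication tables are hard-coded, as addition mod ${4}$ and coordinatewise addition mod ${2}$ respectively):

```python
from fractions import Fraction
from itertools import permutations

def automorphisms(elements, op):
    # all bijections f of the underlying set with f(x*y) = f(x)*f(y)
    autos = []
    for perm in permutations(elements):
        f = dict(zip(elements, perm))
        if all(f[op(x, y)] == op(f[x], f[y]) for x in elements for y in elements):
            autos.append(f)
    return autos

c4 = list(range(4))  # cyclic group C_4 as Z/4Z
klein = [(i, j) for i in range(2) for j in range(2)]  # C_2 x C_2

n_c4 = len(automorphisms(c4, lambda x, y: (x + y) % 4))
n_klein = len(automorphisms(klein, lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)))
print(n_c4, n_klein)  # 2 6
print(Fraction(1, n_c4) + Fraction(1, n_klein))  # 2/3
```

(Indeed ${\mathrm{Aut}(C_2 \times C_2)}$ permutes the three non-identity elements arbitrarily, giving the symmetric group ${S_3}$ of order six.)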

In the case that the cardinality of a groupoid ${X}$ up to isomorphism is finite and non-zero, one can now define the notion of a random isomorphism class ${[\mathbf{x}]}$ in ${X/\sim}$ drawn “uniformly up to isomorphism”, by requiring the probability of attaining any given isomorphism class ${[x]}$ to be

$\displaystyle {\mathbf P}([\mathbf{x}] = [x]) = \frac{1 / |\mathrm{Aut}(x)|}{\sum_{[y] \in X/\sim} 1/|\mathrm{Aut}(y)|},$

thus the probability of being isomorphic to a given element ${x}$ will be inversely proportional to the number of automorphisms that ${x}$ has. For instance, if we take ${X}$ to be the set ${\{-2,-1,0,1,2\}}$ with the simply connected groupoid, ${[\mathbf{x}]}$ will be drawn uniformly from the three available equivalence classes ${[0], [1], [2]}$, with a ${1/3}$ probability of attaining each; but if instead one uses the multiply connected groupoid coming from the action of ${\{-1,+1\}}$, and draws ${[\mathbf{x}]}$ uniformly up to isomorphism, then ${[1]}$ and ${[2]}$ will now be selected with probability ${2/5}$ each, and ${[0]}$ will be selected with probability ${1/5}$. Thus this distribution has accounted for the bias mentioned previously: if a finite group ${G}$ acts on a finite space ${X}$, and ${\mathbf{x}}$ is drawn uniformly from ${X}$, then ${[\mathbf{x}]}$ will still be drawn uniformly up to isomorphism from ${X/G}$, if we use the multiply connected groupoid coming from the ${G}$ action, rather than the simply connected groupoid coming from just the ${G}$-orbit structure on ${X}$.
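In the running example, the normalisation of the weights ${1/|\mathrm{Aut}(x)|}$ can be computed directly; a small sketch:

```python
from fractions import Fraction

# automorphism counts of one representative per class, in the {-1,+1}-action
# groupoid on {-2,...,2}: only 0 has the extra negation automorphism
aut = {"[0]": 2, "[1]": 1, "[2]": 1}

Z = sum(Fraction(1, n) for n in aut.values())  # total mass 5/2
probs = {cls: Fraction(1, n) / Z for cls, n in aut.items()}
for cls, p in sorted(probs.items()):
    print(cls, p)  # [0] 1/5, then [1] 2/5, then [2] 2/5
```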

Using the groupoid of finite sets, we see that a finite set chosen uniformly up to isomorphism will have a cardinality that is distributed according to the Poisson distribution of parameter ${1}$, that is to say it will be of cardinality ${n}$ with probability ${\frac{e^{-1}}{n!}}$.
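For finite ${n}$ one can compare this with the exact distribution of the number of fixed points of a random permutation: there are ${\binom{n}{k} D_{n-k}}$ permutations of ${n}$ elements with exactly ${k}$ fixed points, where ${D_m}$ denotes the number of derangements of ${m}$ elements. A sketch of the comparison:

```python
from math import comb, exp, factorial

def derangements(m):
    # D_0 = 1, D_1 = 0, and the standard recurrence D_m = (m-1)(D_{m-1} + D_{m-2})
    d = [1, 0]
    for i in range(2, m + 1):
        d.append((i - 1) * (d[i - 1] + d[i - 2]))
    return d[m]

n = 12
for k in range(5):
    exact = comb(n, k) * derangements(n - k) / factorial(n)
    poisson = exp(-1) / factorial(k)
    print(k, round(exact, 6), round(poisson, 6))  # the two columns already agree closely
```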

One important source of groupoids are the fundamental groupoids ${\pi_1(M)}$ of a manifold ${M}$ (one can also consider more general topological spaces than manifolds, but for simplicity we will restrict this discussion to the manifold case), in which the underlying space is simply ${M}$, and the isomorphisms from ${x}$ to ${y}$ are the equivalence classes of paths from ${x}$ to ${y}$ up to homotopy; in particular, the automorphism group of any given point is just the fundamental group of ${M}$ at that base point. The equivalence class ${[x]}$ of a point in ${M}$ is then the connected component of ${x}$ in ${M}$. The cardinality up to isomorphism of the fundamental groupoid is then

$\displaystyle \sum_{M' \in \pi_0(M)} \frac{1}{|\pi_1(M')|}$

where ${\pi_0(M)}$ is the collection of connected components ${M'}$ of ${M}$, and ${|\pi_1(M')|}$ is the order of the fundamental group of ${M'}$. Thus, simply connected components of ${M}$ count for a full unit of cardinality, whereas multiply connected components (which can be viewed as quotients of their simply connected cover by their fundamental group) will count for a fractional unit of cardinality, inversely to the order of their fundamental group.

This notion of cardinality up to isomorphism of a groupoid behaves well with respect to various basic notions. For instance, suppose one has an ${n}$-fold covering map ${\pi: X \rightarrow Y}$ of one finite groupoid ${Y}$ by another ${X}$. This means that ${\pi}$ is a functor that is surjective, with all preimages of cardinality ${n}$, with the property that given any pair ${y,y'}$ in the base space ${Y}$ and any ${x}$ in the preimage ${\pi^{-1}(\{y\})}$ of ${y}$, every isomorphism ${f \in \mathrm{Iso}(y \rightarrow y')}$ has a unique lift ${\tilde f \in \mathrm{Iso}(x \rightarrow x')}$ from the given initial point ${x}$ (and some ${x'}$ in the preimage of ${y'}$). Then one can check that the cardinality up to isomorphism of ${X}$ is ${n}$ times the cardinality up to isomorphism of ${Y}$, which fits well with the geometric picture of ${X}$ as the ${n}$-fold cover of ${Y}$. (For instance, if one covers a manifold ${M}$ with finite fundamental group by its universal cover, this is a ${|\pi_1(M)|}$-fold cover, the base has cardinality ${1/|\pi_1(M)|}$ up to isomorphism, and the universal cover has cardinality one up to isomorphism.) Related to this, if one draws an equivalence class ${[\mathbf{x}]}$ of ${X}$ uniformly up to isomorphism, then ${\pi([\mathbf{x}])}$ will be an equivalence class of ${Y}$ drawn uniformly up to isomorphism also.

Indeed, one can show that this notion of cardinality up to isomorphism for groupoids is uniquely determined by a small number of axioms such as these (similar to the axioms that determine Euler characteristic); see this blog post of Qiaochu Yuan for details.

The probability distributions on isomorphism classes described by the above recipe seem to arise naturally in many applications. For instance, if one draws a profinite abelian group up to isomorphism at random in this fashion (so that each isomorphism class ${[G]}$ of a profinite abelian group ${G}$ occurs with probability inversely proportional to the number of automorphisms of this group), then the resulting distribution is known as the Cohen-Lenstra distribution, and seems to emerge as the natural asymptotic distribution of many randomly generated profinite abelian groups in number theory and combinatorics, such as the class groups of random quadratic fields; see this previous blog post for more discussion. For a simple combinatorial example, the set of fixed points of a random permutation on ${n}$ elements will have a cardinality that converges in distribution to the Poisson distribution of rate ${1}$ (as discussed in this previous post), thus we see that the fixed points of a large random permutation asymptotically are distributed uniformly up to isomorphism. I’ve been told that this notion of cardinality up to isomorphism is also particularly compatible with stacks (which are a good framework to describe such objects as moduli spaces of algebraic varieties up to isomorphism), though I am not sufficiently acquainted with this theory to say much more than this.