
This is a sequel to this previous blog post, in which we discussed the effect of the heat flow evolution

\displaystyle  \partial_t P(t,z) = \partial_{zz} P(t,z)

on the zeroes of a time-dependent family of polynomials {z \mapsto P(t,z)}, with a particular focus on the case when the polynomials {z \mapsto P(t,z)} had real zeroes. Here (inspired by some discussions I had during a recent conference on the Riemann hypothesis in Bristol) we record the analogous theory in which the polynomials instead have zeroes on a circle {\{ z: |z| = \sqrt{q} \}}, with the heat flow slightly adjusted to compensate for this. As we shall discuss shortly, a key example of this situation arises when {P} is the numerator of the zeta function of a curve.

More precisely, let {g} be a natural number. We will say that a polynomial

\displaystyle  P(z) = \sum_{j=0}^{2g} a_j z^j

of degree {2g} (so that {a_{2g} \neq 0}) obeys the functional equation if the {a_j} are all real and

\displaystyle  a_j = q^{g-j} a_{2g-j}

for all {j=0,\dots,2g}, thus

\displaystyle  P(\overline{z}) = \overline{P(z)}

and

\displaystyle  P(q/z) = q^g z^{-2g} P(z)

for all non-zero {z}. This means that the {2g} zeroes {\alpha_1,\dots,\alpha_{2g}} of {P(z)} (counting multiplicity) lie in {{\bf C} \backslash \{0\}} and are symmetric with respect to complex conjugation {z \mapsto \overline{z}} and inversion {z \mapsto q/z} across the circle {\{ |z| = \sqrt{q}\}}. We say that this polynomial obeys the Riemann hypothesis if all of its zeroes actually lie on the circle {\{ |z| = \sqrt{q}\}}. For instance, in the {g=1} case, the polynomial {z^2 - a_1 z + q} obeys the Riemann hypothesis if and only if {|a_1| \leq 2\sqrt{q}}.

Such polynomials arise in number theory as follows: if {C} is a projective curve of genus {g} over a finite field {\mathbf{F}_q}, then, as famously proven by Weil, the associated local zeta function {\zeta_{C,q}(z)} (as defined for instance in this previous blog post) is known to take the form

\displaystyle  \zeta_{C,q}(z) = \frac{P(z)}{(1-z)(1-qz)}

where {P} is a degree {2g} polynomial obeying both the functional equation and the Riemann hypothesis. In the case that {C} is an elliptic curve, then {g=1} and {P} takes the form {P(z) = z^2 - a_1 z + q}, where {a_1} is the number of {{\bf F}_q}-points of {C} minus {q+1}. The Riemann hypothesis in this case is a famous result of Hasse.
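As a quick numerical illustration of the {g=1} criterion (the values of {q} and {a_1} below are arbitrary), one can check that the zeroes of {z^2 - a_1 z + q} lie on the circle {\{ |z| = \sqrt{q}\}} precisely when {|a_1| \leq 2\sqrt{q}}:

```python
import numpy as np

q = 7  # an arbitrary prime power; 2*sqrt(7) ~ 5.29
# |a_1| <= 2*sqrt(q): zeroes lie on the circle |z| = sqrt(q)
for a1 in [-5, -2, 0, 3, 5]:
    roots = np.roots([1, -a1, q])  # zeroes of z^2 - a_1 z + q
    assert np.allclose(np.abs(roots), np.sqrt(q))

# |a_1| > 2*sqrt(q): the zeroes move off the circle onto the real axis
roots = np.roots([1, -6, q])
assert np.allclose(roots.imag, 0)
assert not np.allclose(np.abs(roots), np.sqrt(q))
```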

Another key example of such polynomials arises from the rescaled characteristic polynomials

\displaystyle  P(z) := \det( z - \sqrt{q} F ) \ \ \ \ \ (1)

of {2g \times 2g} matrices {F} in the compact symplectic group {Sp(g)}. These polynomials obey both the functional equation and the Riemann hypothesis. The Sato-Tate conjecture (in higher genus) asserts, roughly speaking, that “typical” polynomials {P} arising from the number theoretic situation above are distributed like the rescaled characteristic polynomials (1), where {F} is drawn uniformly from {Sp(g)} with Haar measure.

Given a polynomial {z \mapsto P(0,z)} of degree {2g} with coefficients

\displaystyle  P(0,z) = \sum_{j=0}^{2g} a_j(0) z^j,

we can evolve it in time by the formula

\displaystyle  P(t,z) = \sum_{j=0}^{2g} \exp( t(j-g)^2 ) a_j(0) z^j,

thus {a_j(t) = \exp(t(j-g)^2) a_j(0)} for {t \in {\bf R}}. Informally, as one increases {t}, this evolution accentuates the effect of the extreme monomials {z^0} and {z^{2g}} at the expense of the intermediate monomials such as {z^g}, and conversely as one decreases {t}. This family of polynomials obeys the heat-type equation

\displaystyle  \partial_t P(t,z) = (z \partial_z - g)^2 P(t,z). \ \ \ \ \ (2)

In view of the results of Marcus, Spielman, and Srivastava, it is also very likely that one can interpret this flow in terms of expected characteristic polynomials involving conjugation over the compact symplectic group {Sp(g)}, and it should also be tied to some sort of “{\beta=\infty}” version of Brownian motion on this group, but we have not attempted to work this connection out in detail.
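Since {(z \partial_z - g)^2 z^j = (j-g)^2 z^j}, the claim that the coefficient evolution {a_j(t) = \exp(t(j-g)^2) a_j(0)} solves (2) can be sanity-checked by a finite difference in {t}; here is a small numerical illustration with arbitrary random coefficients:

```python
import numpy as np

g = 2
rng = np.random.default_rng(0)
a0 = rng.standard_normal(2 * g + 1)   # coefficients a_j(0), j = 0, ..., 2g
j = np.arange(2 * g + 1)

def P(t, z):
    # P(t,z) = sum_j exp(t (j-g)^2) a_j(0) z^j  (np.polyval wants highest degree first)
    return np.polyval((np.exp(t * (j - g) ** 2) * a0)[::-1], z)

def rhs(t, z):
    # (z d/dz - g)^2 acts diagonally on monomials: z^j -> (j-g)^2 z^j
    return np.polyval(((j - g) ** 2 * np.exp(t * (j - g) ** 2) * a0)[::-1], z)

t, z, h = 0.3, 1.7 + 0.5j, 1e-6
dPdt = (P(t + h, z) - P(t - h, z)) / (2 * h)   # centered difference in t
assert abs(dPdt - rhs(t, z)) < 1e-5            # matches (z d/dz - g)^2 P
```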

It is clear that if {z \mapsto P(0,z)} obeys the functional equation, then so does {z \mapsto P(t,z)} for any other time {t}. Now we investigate the evolution of the zeroes. Suppose that at some time {t_0} the zeroes {\alpha_1(t_0),\dots,\alpha_{2g}(t_0)} of {z \mapsto P(t_0,z)} are distinct; then

\displaystyle  P(t_0,z) = a_{2g}(0) \exp( t_0g^2 ) \prod_{j=1}^{2g} (z - \alpha_j(t_0) ).

From the inverse function theorem we see that for times {t} sufficiently close to {t_0}, the zeroes {\alpha_1(t),\dots,\alpha_{2g}(t)} of {z \mapsto P(t,z)} continue to be distinct (and vary smoothly in {t}), with

\displaystyle  P(t,z) = a_{2g}(0) \exp( t g^2 ) \prod_{j=1}^{2g} (z - \alpha_j(t) ).

Differentiating this at any {z} not equal to any of the {\alpha_j(t)}, we obtain

\displaystyle  \partial_t P(t,z) = P(t,z) ( g^2 - \sum_{j=1}^{2g} \frac{\alpha'_j(t)}{z - \alpha_j(t)})

and

\displaystyle  \partial_z P(t,z) = P(t,z) ( \sum_{j=1}^{2g} \frac{1}{z - \alpha_j(t)})

and

\displaystyle  \partial_{zz} P(t,z) = P(t,z) ( \sum_{1 \leq j,k \leq 2g: j \neq k} \frac{1}{(z - \alpha_j(t))(z - \alpha_k(t))}).

Inserting these formulae into (2) (expanding {(z \partial_z - g)^2} as {z^2 \partial_{zz} - (2g-1) z \partial_z + g^2}) and canceling some terms, we conclude that

\displaystyle  - \sum_{j=1}^{2g} \frac{\alpha'_j(t)}{z - \alpha_j(t)} = z^2 \sum_{1 \leq j,k \leq 2g: j \neq k} \frac{1}{(z - \alpha_j(t))(z - \alpha_k(t))}

\displaystyle  - (2g-1) z \sum_{j=1}^{2g} \frac{1}{z - \alpha_j(t)}

for {t} sufficiently close to {t_0}, and {z} not equal to {\alpha_1(t),\dots,\alpha_{2g}(t)}. Extracting the residue at {z = \alpha_j(t)}, we conclude that

\displaystyle  - \alpha'_j(t) = 2 \alpha_j(t)^2 \sum_{1 \leq k \leq 2g: k \neq j} \frac{1}{\alpha_j(t) - \alpha_k(t)} - (2g-1) \alpha_j(t)

which we can rearrange as

\displaystyle  \frac{\alpha'_j(t)}{\alpha_j(t)} = - \sum_{1 \leq k \leq 2g: k \neq j} \frac{\alpha_j(t)+\alpha_k(t)}{\alpha_j(t)-\alpha_k(t)}.

If we make the change of variables {\alpha_j(t) = \sqrt{q} e^{i\theta_j(t)}} (noting that one can make {\theta_j} depend smoothly on {t} for {t} sufficiently close to {t_0}), this becomes

\displaystyle  \partial_t \theta_j(t) = \sum_{1 \leq k \leq 2g: k \neq j} \cot \frac{\theta_j(t) - \theta_k(t)}{2}. \ \ \ \ \ (3)

Intuitively, this equation asserts that the phases {\theta_j} repel each other if they are real (and attract each other if their difference is imaginary). If {z \mapsto P(t_0,z)} obeys the Riemann hypothesis, then the {\theta_j} are all real at time {t_0}, and the Picard uniqueness theorem (applied to {\theta_j(t)} and its complex conjugate) then shows that the {\theta_j} remain real for {t} sufficiently close to {t_0}. If we then define the entropy functional

\displaystyle  H(\theta_1,\dots,\theta_{2g}) := \sum_{1 \leq j < k \leq 2g} \log \frac{1}{|\sin \frac{\theta_j-\theta_k}{2}| }

then the above equation becomes a gradient flow

\displaystyle  \partial_t \theta_j(t) = - 2 \frac{\partial H}{\partial \theta_j}( \theta_1(t),\dots,\theta_{2g}(t) )

which implies in particular that {H(\theta_1(t),\dots,\theta_{2g}(t))} is non-increasing in time. Since every summand of {H} is non-negative, this gives a uniform upper bound on each term {\log \frac{1}{|\sin \frac{\theta_j-\theta_k}{2}|}} as one evolves time forward from {t_0}, and hence a uniform lower bound on the separation between the phases {\theta_1(t),\dots,\theta_{2g}(t)}; therefore the equation can be solved indefinitely, and in particular {z \mapsto P(t,z)} obeys the Riemann hypothesis for all {t > t_0} if it does so at time {t_0}. Our argument here assumed that the zeroes of {z \mapsto P(t_0,z)} were simple, but this assumption can be removed by the usual limiting argument.
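One can watch this repulsion numerically: a forward Euler discretisation of (3), started from perturbed equally spaced phases, decreases the entropy {H} and drives the phases back towards equal spacing. (An illustrative simulation; the parameters here are arbitrary.)

```python
import numpy as np

g, dt, steps = 4, 1e-4, 20000
rng = np.random.default_rng(1)
# equal spacing plus a small random perturbation
theta = 2 * np.pi * np.arange(2 * g) / (2 * g) + 0.1 * rng.standard_normal(2 * g)

def velocity(th):
    # right-hand side of (3): sum over k != j of cot((theta_j - theta_k)/2)
    v = np.zeros_like(th)
    for jj in range(len(th)):
        for kk in range(len(th)):
            if kk != jj:
                v[jj] += 1.0 / np.tan((th[jj] - th[kk]) / 2)
    return v

def H(th):
    # entropy functional: sum over pairs of log 1/|sin((theta_j - theta_k)/2)|
    s = 0.0
    for jj in range(len(th)):
        for kk in range(jj + 1, len(th)):
            s -= np.log(abs(np.sin((th[jj] - th[kk]) / 2)))
    return s

h_start = H(theta)
for _ in range(steps):
    theta = theta + dt * velocity(theta)

assert H(theta) < h_start            # H is non-increasing along the flow

ss = np.sort(theta % (2 * np.pi))
gaps = np.diff(np.concatenate([ss, [ss[0] + 2 * np.pi]]))
assert np.ptp(gaps) < 0.1            # phases approach equal spacing pi/g
```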

For any polynomial {z \mapsto P(0,z)} obeying the functional equation, the rescaled polynomials {z \mapsto e^{-g^2 t} P(t,z)} converge locally uniformly to {a_{2g}(0) (z^{2g} + q^g)} as {t \rightarrow +\infty}. By Rouche’s theorem, we conclude that the zeroes of {z \mapsto P(t,z)} converge to the equally spaced points {\{ \sqrt{q} e^{2\pi i(j+1/2)/2g}: j=1,\dots,2g\}} on the circle {\{ |z| = \sqrt{q}\}}. Together with the symmetry properties of the zeroes, this implies in particular that {z \mapsto P(t,z)} obeys the Riemann hypothesis for all sufficiently large positive {t}. In the opposite direction, when {t \rightarrow -\infty}, the polynomials {z \mapsto P(t,z)} converge locally uniformly to {a_g(0) z^g}, so if {a_g(0) \neq 0}, {g} of the zeroes converge to the origin and the other {g} converge to infinity. In particular, {z \mapsto P(t,z)} fails the Riemann hypothesis for sufficiently large negative {t}. Thus (if {a_g(0) \neq 0}), there must exist a real number {\Lambda}, which we call the de Bruijn-Newman constant of the original polynomial {z \mapsto P(0,z)}, such that {z \mapsto P(t,z)} obeys the Riemann hypothesis for {t \geq \Lambda} and fails the Riemann hypothesis for {t < \Lambda}. The situation is a bit more complicated if {a_g(0)} vanishes; if {k} is the first natural number such that {a_{g+k}(0)} (or equivalently, {a_{g-k}(0)}) does not vanish, then by the above arguments one finds in the limit {t \rightarrow -\infty} that {g-k} of the zeroes go to the origin, {g-k} go to infinity, and the remaining {2k} zeroes converge to the equally spaced points {\{ \sqrt{q} e^{2\pi i(j+1/2)/2k}: j=1,\dots,2k\}} on the circle {\{ |z| = \sqrt{q}\}}. In this case the de Bruijn-Newman constant remains finite except in the degenerate case {k=g}, in which case {\Lambda = -\infty}.

For instance, consider the case when {g=1} and {P(0,z) = z^2 - a_1 z + q} for some real {a_1} with {|a_1| \leq 2\sqrt{q}}. Then the quadratic polynomial

\displaystyle  P(t,z) = e^t z^2 - a_1 z + e^t q

has zeroes

\displaystyle  \frac{a_1 \pm \sqrt{a_1^2 - 4 e^{2t} q}}{2e^t}

and one easily checks that these zeroes lie on the circle {\{ |z|=\sqrt{q}\}} when {t \geq \log \frac{|a_1|}{2\sqrt{q}}}, and are on the real axis otherwise. Thus in this case we have {\Lambda = \log \frac{|a_1|}{2\sqrt{q}}} (with {\Lambda=-\infty} if {a_1=0}). Note how as {t} increases to {+\infty}, the zeroes repel each other and eventually converge to {\pm i \sqrt{q}}, while as {t} decreases to {-\infty}, the zeroes collide and then separate on the real axis, with one zero going to the origin and the other to infinity.
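A quick numerical check of this example (the sample values of {q} and {a_1} below are arbitrary):

```python
import numpy as np

q, a1 = 5.0, 3.0
Lam = np.log(abs(a1) / (2 * np.sqrt(q)))   # the de Bruijn-Newman constant

def zeroes(t):
    # zeroes of P(t,z) = e^t z^2 - a_1 z + e^t q
    return np.roots([np.exp(t), -a1, np.exp(t) * q])

# for t >= Lambda the zeroes lie on the circle |z| = sqrt(q) ...
for t in [Lam, Lam + 0.5, 5.0]:
    assert np.allclose(np.abs(zeroes(t)), np.sqrt(q))

# ... and for t < Lambda they are real and off the circle
for t in [Lam - 0.1, Lam - 2.0]:
    z = zeroes(t)
    assert np.allclose(z.imag, 0)
    assert not np.allclose(np.abs(z), np.sqrt(q))
```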

The arguments in my paper with Brad Rodgers (discussed in this previous post) indicate that for a “typical” polynomial {P} of degree {2g} that obeys the Riemann hypothesis, the expected time to relaxation to equilibrium (in which the zeroes are equally spaced) should be comparable to {1/g}, basically because the average spacing is {1/g} and hence by (3) the typical velocity of the zeroes should be comparable to {g}, and the diameter of the unit circle is comparable to {1}, thus requiring time comparable to {1/g} to reach equilibrium. Taking contrapositives, this suggests that the de Bruijn-Newman constant {\Lambda} should typically take on values comparable to {-1/g} (since typically one would not expect the initial configuration of zeroes to be close to evenly spaced). I have not attempted to formalise or prove this claim, but presumably one could do some numerics (perhaps using some of the examples of {P} given previously) to explore this further.

The Polymath14 online collaboration has uploaded to the arXiv its paper “Homogeneous length functions on groups“, submitted to Algebra & Number Theory. The paper completely classifies homogeneous length functions {\| \|: G \rightarrow {\bf R}^+} on an arbitrary group {G = (G,\cdot,e,()^{-1})}, that is to say non-negative functions that obey the symmetry condition {\|x^{-1}\| = \|x\|}, the non-degeneracy condition {\|x\|=0 \iff x=e}, the triangle inequality {\|xy\| \leq \|x\| + \|y\|}, and the homogeneity condition {\|x^2\| = 2\|x\|}. It turns out that these norms can only arise from pulling back the norm of a Banach space by an isometric embedding of the group. Among other things, this shows that {G} can support a homogeneous length function if and only if it is abelian and torsion free, thus giving a metric description of this property.

The proof is based on repeated use of the homogeneous length function axioms, combined with elementary identities of commutators, to obtain increasingly good bounds on quantities such as {\|[x,y]\|}, until one can show that such norms have to vanish. See the previous post for a full proof. The result is robust in that it allows for some loss in the triangle inequality and homogeneity condition, allowing for some new results on “quasinorms” on groups that relate to quasihomomorphisms.

As there are now a large number of comments on the previous post on this project, this post will also serve as the new thread for any final discussion of this project as it winds down.

In the tradition of “Polymath projects“, the problem posed in the previous two blog posts has now been solved, thanks to the cumulative effect of many small contributions by many participants (including, but not limited to, Sean Eberhard, Tobias Fritz, Siddhartha Gadgil, Tobias Hartnick, Chris Jerdonek, Apoorva Khare, Antonio Machiavelo, Pace Nielsen, Andy Putman, Will Sawin, Alexander Shamov, Lior Silberman, and David Speyer). In this post I’ll write down a streamlined resolution, eliding a number of important but ultimately removable partial steps and insights made by the above contributors en route to the solution.

Theorem 1 Let {G = (G,\cdot)} be a group. Suppose one has a “seminorm” function {\| \|: G \rightarrow [0,+\infty)} which obeys the triangle inequality

\displaystyle \|xy \| \leq \|x\| + \|y\|

for all {x,y \in G}, with equality whenever {x=y}. Then the seminorm factors through the abelianisation map {G \rightarrow G/[G,G]}.

Proof: By the triangle inequality, it suffices to show that {\| [x,y]\| = 0} for all {x,y \in G}, where {[x,y] := xyx^{-1}y^{-1}} is the commutator.

We first establish some basic facts. Firstly, by hypothesis we have {\|x^2\| = 2 \|x\|}, and hence {\|x^n \| = n \|x\|} whenever {n} is a power of two. On the other hand, by the triangle inequality we have {\|x^n \| \leq n\|x\|} for all positive {n}, and hence by the triangle inequality again we also have the matching lower bound, thus

\displaystyle \|x^n \| = n \|x\|

for all {n > 0}. The claim is also true for {n=0} (apply the preceding bound with {x=1} and {n=2}). By replacing {\|x\|} with {\max(\|x\|, \|x^{-1}\|)} if necessary we may now also assume without loss of generality that {\|x^{-1} \| = \|x\|}, thus

\displaystyle \|x^n \| = |n| \|x\| \ \ \ \ \ (1)

 

for all integers {n}.

Next, for any {x,y \in G}, and any natural number {n}, we have

\displaystyle \|yxy^{-1} \| = \frac{1}{n} \| (yxy^{-1})^n \|

\displaystyle = \frac{1}{n} \| y x^n y^{-1} \|

\displaystyle \leq \frac{1}{n} ( \|y\| + n \|x\| + \|y^{-1}\| )

so on taking limits as {n \rightarrow \infty} we have {\|yxy^{-1} \| \leq \|x\|}. Replacing {x,y} by {yxy^{-1},y^{-1}} gives the matching lower bound, thus we have the conjugation invariance

\displaystyle \|yxy^{-1} \| = \|x\|. \ \ \ \ \ (2)

 

Next, we observe that if {x,y,z,w} are such that {x} is conjugate to both {wy} and {zw^{-1}}, then one has the inequality

\displaystyle \|x\| \leq \frac{1}{2} ( \|y \| + \| z \| ). \ \ \ \ \ (3)

 

Indeed, if we write {x = swys^{-1} = t zw^{-1} t^{-1}} for some {s,t \in G}, then for any natural number {n} one has

\displaystyle \|x\| = \frac{1}{2n} \| x^n x^n \|

\displaystyle = \frac{1}{2n} \| swy \dots wy s^{-1}t zw^{-1} \dots zw^{-1} t^{-1} \|

where the {wy} and {zw^{-1}} terms each appear {n} times. From (2) we see that conjugation by {w} does not affect the norm. Using this and the triangle inequality several times, we conclude that

\displaystyle \|x\| \leq \frac{1}{2n} ( \|s\| + n \|y\| + \| s^{-1} t\| + n \|z\| + \|t^{-1} \| ),

and the claim (3) follows by sending {n \rightarrow \infty}.

The following special case of (3) will be of particular interest. Let {x,y \in G}, and for any integers {m,k}, define the quantity

\displaystyle f(m,k) := \| x^m [x,y]^k \|.

Observe that {x^m [x,y]^k} is conjugate to both {x (x^{m-1} [x,y]^k)} and to {(y^{-1} x^m [x,y]^{k-1} xy) x^{-1}}, hence by (3) one has

\displaystyle \| x^m [x,y]^k \| \leq \frac{1}{2} ( \| x^{m-1} [x,y]^k \| + \| y^{-1} x^{m} [x,y]^{k-1} xy \|)

which by (2) leads to the recursive inequality

\displaystyle f(m,k) \leq \frac{1}{2} (f(m-1,k) + f(m+1,k-1)).

We can write this in probabilistic notation as

\displaystyle f(m,k) \leq {\bf E} f( (m,k) + X )

where {X} is a random vector that takes the values {(-1,0)} and {(1,-1)} with probability {1/2} each. Iterating this, we conclude in particular that for any large natural number {n}, one has

\displaystyle f(0,n) \leq {\bf E} f( Z )

where {Z := (0,n) + X_1 + \dots + X_{2n}} and {X_1,\dots,X_{2n}} are iid copies of {X}. We can write {Z = (1,-1/2) (Y_1 + \dots + Y_{2n})} where {Y_1,\dots,Y_{2n} = \pm 1} are iid signs. By the triangle inequality, we thus have

\displaystyle f( Z ) \leq |Y_1+\dots+Y_{2n}| (\|x\| + \frac{1}{2} \| [x,y] \|),

noting that {Y_1+\dots+Y_{2n}} is an even integer. On the other hand, {Y_1+\dots+Y_{2n}} has mean zero and variance {2n}, hence by Cauchy-Schwarz

\displaystyle f(0,n) \leq \sqrt{2n}( \|x\| + \frac{1}{2} \| [x,y] \|).

But by (1), the left-hand side is equal to {n \| [x,y]\|}. Dividing by {n} and then sending {n \rightarrow \infty}, we obtain the claim. \Box
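The final step uses only that a sum of {2n} independent signs has mean zero and variance {2n}, so that {{\bf E} |Y_1+\dots+Y_{2n}| \leq \sqrt{2n}} by Cauchy-Schwarz; this is easy to confirm by Monte Carlo (an illustrative check with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 200_000
Y = rng.choice([-1, 1], size=(trials, 2 * n))   # iid signs Y_1, ..., Y_{2n}
S = Y.sum(axis=1)

assert abs(S.mean()) < 0.1                               # mean zero
assert abs((S.astype(float) ** 2).mean() - 2 * n) < 2.0  # variance 2n
assert np.abs(S).mean() <= np.sqrt(2 * n)                # E|S| <= sqrt(E S^2) = sqrt(2n)
```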

The above theorem reduces such seminorms to abelian groups. It is easy to see from (1) that any torsion element of such groups has zero seminorm, so we can in fact restrict to torsion-free groups, which we now write using additive notation {G = (G,+)}, thus for instance {\| nx \| = |n| \|x\|} for {n \in {\bf Z}}. We think of {G} as a {{\bf Z}}-module. One can then extend the seminorm to the associated {{\bf Q}}-vector space {G \otimes_{\bf Z} {\bf Q}} by the formula {\|\frac{a}{b} x\| := \frac{|a|}{b} \|x\|} for integers {a} and positive integers {b}, and then to the associated {{\bf R}}-vector space {G \otimes_{\bf Z} {\bf R}} by continuity, at which point it becomes a genuine seminorm (provided we have ensured the symmetry condition {\|{-x}\| = \|x\|}). Conversely, any seminorm on {G \otimes_{\bf Z} {\bf R}} induces a seminorm on {G}. (These arguments also appear in this paper of Khare and Rajaratnam.)

 

This post is a continuation of the previous post, which has attracted a large number of comments. I’m recording here some calculations that arose from those comments (particularly those of Pace Nielsen, Lior Silberman, Tobias Fritz, and Apoorva Khare). Please feel free to either continue these calculations or to discuss other approaches to the problem, such as those mentioned in the remaining comments to the previous post.

Let {F_2} be the free group on two generators {a,b}, and let {\| \|: F_2 \rightarrow {\bf R}^+} be a quantity obeying the triangle inequality

\displaystyle \| xy\| \leq \|x \| + \|y\|

and the linear growth property

\displaystyle \| x^n \| = |n| \| x\|

for all {x,y \in F_2} and integers {n \in {\bf Z}}; this implies the conjugation invariance

\displaystyle \| y^{-1} x y \| = \|x\|

or equivalently

\displaystyle \| xy \| = \| yx\|.

We consider inequalities of the form

\displaystyle \| xyx^{-1}y^{-1} \| \leq \alpha \|x\| + \beta \| y\| \ \ \ \ \ (1)

or

\displaystyle \| xyx^{-2}y^{-1} \| \leq \gamma \|x\| + \delta \| y\| \ \ \ \ \ (2)

for various real numbers {\alpha,\beta,\gamma,\delta}. For instance, since

\displaystyle \| xyx^{-1}y^{-1} \| \leq \| xyx^{-1}\| + \|y^{-1} \| = \|y\| + \|y\|

we have (1) for {(\alpha,\beta) = (0,2)}. We also have the following further relations:

Proposition 1

  • (i) If (1) holds for {(\alpha,\beta)}, then it holds for {(\beta,\alpha)}.
  • (ii) If (1) holds for {(\alpha,\beta)}, then (2) holds for {(\alpha+1, \frac{\beta}{2})}.
  • (iii) If (2) holds for {(\gamma,\delta)}, then (1) holds for {(\frac{2\gamma}{3}, \frac{2\delta}{3})}.
  • (iv) If (1) holds for {(\alpha,\beta)} and (2) holds for {(\gamma,\delta)}, then (1) holds for {(\frac{2\alpha+1+\gamma}{4}, \frac{\delta+\beta}{4})}.

Proof: For (i) we simply observe that

\displaystyle \| xyx^{-1} y^{-1} \| = \| (xyx^{-1} y^{-1})^{-1} \| = \| y^{-1} x^{-1} y x \| = \| y x y^{-1} x^{-1} \|.

For (ii), we calculate

\displaystyle \| xyx^{-2}y^{-1} \| = \frac{1}{2}\| (xyx^{-2}y^{-1})^2 \|

\displaystyle = \frac{1}{2} \| (xyx^{-2}y^{-1} x) (yx^{-2} y^{-1}) \|

\displaystyle \leq \frac{1}{2} (\| xyx^{-2}y^{-1} x\| + \|yx^{-2} y^{-1}\|)

\displaystyle \leq \frac{1}{2} ( \| x^2 y x^{-2} y^{-1} \| + 2 \|x\| )

\displaystyle \leq \frac{1}{2} ( 2 \alpha \|x\| + \beta \|y\| + 2 \|x\|)

giving the claim.

For (iii), we calculate

\displaystyle \| xyx^{-1}y^{-1}\| = \frac{1}{3} \| (xyx^{-1}y^{-1})^3 \|

\displaystyle = \frac{1}{3} \| (xyx) (x^{-2} y^{-1} xy) (xyx)^{-1} (x^2 y x^{-1} y^{-1}) \|

\displaystyle \leq \frac{1}{3} ( \| x^{-2} y^{-1} xy\| + \| x^2 y x^{-1} y^{-1}\| )

\displaystyle = \frac{1}{3} ( \| xy x^{-2} y^{-1} \| + \|x^{-1} y^{-1} x^2 y \| )

\displaystyle \leq \frac{1}{3} ( \gamma \|x\| + \delta \|y\| + \gamma \|x\| + \delta \|y\|)

giving the claim.

For (iv), we calculate

\displaystyle \| xyx^{-1}y^{-1}\| = \frac{1}{4} \| (xyx^{-1}y^{-1})^4 \|

\displaystyle = \frac{1}{4} \| (xy) (x^{-1} y^{-1} x) (y x^{-1} y^{-1}) (xyx^{-1}) (xy)^{-1} (x^2yx^{-1}y^{-1}) \|

\displaystyle \leq \frac{1}{4} ( \| (x^{-1} y^{-1} x) (y x^{-1} y^{-1}) (xyx^{-1}) \| + \|x^2yx^{-1}y^{-1}\| )

\displaystyle \leq \frac{1}{4} ( \|(y x^{-1} y^{-1}) (xy^{-1}x^{-1})(x^{-1} y x) \| + \gamma \|x\| + \delta \|y\|)

\displaystyle \leq \frac{1}{4} ( \|x\| + \|(xy^{-1}x^{-1})(x^{-1} y x) \| + \gamma \|x\| + \delta \|y\|)

\displaystyle = \frac{1}{4} ( \|x\| + \|x^{-2} y x^2 y^{-1} \|+ \gamma \|x\| + \delta \|y\|)

\displaystyle \leq \frac{1}{4} ( \|x\| + 2\alpha \|x\| + \beta \|y\| + \gamma \|x\| + \delta \|y\|)

giving the claim. \Box
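The commutator identities used in the proofs of (iii) and (iv) can be checked mechanically by free reduction; here is a small illustrative verifier which represents a word in the free group as a list of (generator, exponent) letters:

```python
def red(w):
    """Freely reduce a word given as a list of (symbol, +1/-1) letters."""
    out = []
    for s, e in w:
        if out and out[-1] == (s, -e):
            out.pop()          # cancel adjacent inverse letters
        else:
            out.append((s, e))
    return out

def inv(w):
    return [(s, -e) for s, e in reversed(w)]

x, y = [("x", 1)], [("y", 1)]
w = x + y + inv(x) + inv(y)    # the commutator xyx^{-1}y^{-1}

# identity behind (iii):
# (xyx^{-1}y^{-1})^3 = (xyx)(x^{-2}y^{-1}xy)(xyx)^{-1}(x^2yx^{-1}y^{-1})
s = x + y + x
rhs3 = s + inv(x) * 2 + inv(y) + x + y + inv(s) + x * 2 + y + inv(x) + inv(y)
assert red(w * 3) == red(rhs3)

# identity behind (iv):
# (xyx^{-1}y^{-1})^4 = (xy)(x^{-1}y^{-1}x)(yx^{-1}y^{-1})(xyx^{-1})(xy)^{-1}(x^2yx^{-1}y^{-1})
t = x + y
rhs4 = (t + inv(x) + inv(y) + x + y + inv(x) + inv(y) + x + y + inv(x)
        + inv(t) + x * 2 + y + inv(x) + inv(y))
assert red(w * 4) == red(rhs4)
```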

Here is a typical application of the above estimates. If (1) holds for {(\alpha,\beta)}, then by part (i) it holds for {(\beta,\alpha)}, then by (ii) (2) holds for {(\beta+1,\frac{\alpha}{2})}, then by (iv) (1) holds for {(\frac{3\beta+2}{4}, \frac{3\alpha}{8})}. The map {(\alpha,\beta) \mapsto (\frac{3\beta+2}{4}, \frac{3\alpha}{8})} has fixed point {(\alpha,\beta) = (\frac{16}{23}, \frac{6}{23})}, thus

\displaystyle \| xyx^{-1}y^{-1} \| \leq \frac{16}{23} \|x\| + \frac{6}{23} \|y\|.

For instance, if {\|a\|, \|b\| \leq 1}, then {\|aba^{-1}b^{-1} \| \leq 22/23 = 0.95652\dots}.
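The fixed point can be verified by iterating the map numerically: starting from any initial bound, for instance {(\alpha,\beta) = (0,2)} coming from the triangle inequality, the iteration contracts to {(\frac{16}{23}, \frac{6}{23})} (an illustrative check):

```python
alpha, beta = 0.0, 2.0       # any starting bound works; the map is a contraction
for _ in range(200):
    alpha, beta = (3 * beta + 2) / 4, 3 * alpha / 8

assert abs(alpha - 16 / 23) < 1e-12
assert abs(beta - 6 / 23) < 1e-12
assert abs(alpha + beta - 22 / 23) < 1e-12   # the bound when ||x||, ||y|| <= 1
```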

Here is a curious question posed to me by Apoorva Khare that I do not know the answer to. Let {F_2} be the free group on two generators {a,b}. Does there exist a metric {d} on this group which is

  • bi-invariant, thus {d(xg,yg)=d(gx,gy) = d(x,y)} for all {x,y,g \in F_2}; and
  • linear growth in the sense that {d(x^n,1) = n d(x,1)} for all {x \in F_2} and all natural numbers {n}?

By defining the “norm” of an element {x \in F_2} to be {\| x\| := d(x,1)}, an equivalent formulation of the problem asks if there exists a non-negative norm function {\| \|: F_2 \rightarrow {\bf R}^+} that obeys the conjugation invariance

\displaystyle  \| gxg^{-1} \| = \|x \| \ \ \ \ \ (1)

for all {x,g \in F_2}, the triangle inequality

\displaystyle  \| xy \| \leq \| x\| + \| y\| \ \ \ \ \ (2)

for all {x,y \in F_2}, and the linear growth

\displaystyle  \| x^n \| = |n| \|x\| \ \ \ \ \ (3)

for all {x \in F_2} and {n \in {\bf Z}}, and such that {\|x\| > 0} for all non-identity {x \in F_2}. Indeed, if such a norm exists then one can just take {d(x,y) := \| x y^{-1} \|} to give the desired metric.

One can normalise the norm of the generators to be at most {1}, thus

\displaystyle  \| a \|, \| b \| \leq 1.

This can then be used to upper bound the norm of other words in {F_2}. For instance, from (1), (3) one has

\displaystyle  \| aba^{-1} \|, \| b^{-1} a b \|, \| a^{-1} b^{-1} a \|, \| bab^{-1}\| \leq 1.

A bit less trivially, from (3), (2), (1) one can bound commutators as

\displaystyle  \| aba^{-1} b^{-1} \| = \frac{1}{3} \| (aba^{-1} b^{-1})^3 \|

\displaystyle  = \frac{1}{3} \| (aba^{-1}) (b^{-1} ab) (a^{-1} b^{-1} a) (b a^{-1} b^{-1}) \|

\displaystyle  \leq \frac{4}{3}.

In a similar spirit one has

\displaystyle  \| aba^{-2} b^{-1} \| = \frac{1}{2} \| (aba^{-2} b^{-1})^2 \|

\displaystyle  = \frac{1}{2} \| (aba^{-1}) (a^{-1} b^{-1} a) (ba^{-1} b^{-1}) (ba^{-1} b^{-1}) \|

\displaystyle  \leq 2.

What is not clear to me is if one can keep arguing like this to continually improve the upper bounds on the norm {\| g\|} of a given non-trivial group element {g} to the point where this norm must in fact vanish, which would demonstrate that no metric with the above properties on {F_2} would exist (and in fact would impose strong constraints on similar metrics existing on other groups as well). It is also tempting to use some ideas from geometric group theory (e.g. asymptotic cones) to try to understand these metrics further, though I wasn’t able to get very far with this approach. Anyway, this feels like a problem that might be somewhat receptive to a more crowdsourced attack, so I am posing it here in case any readers wish to try to make progress on it.

How many groups of order four are there? Technically, there are an enormous number, so much so, in fact, that the class of groups of order four is not even a set, but merely a proper class. This is because any four objects {a,b,c,d} can be turned into a group {\{a,b,c,d\}} by designating one of the four objects, say {a}, to be the group identity, and imposing a suitable multiplication table (and inversion law) on the four elements in a manner that obeys the usual group axioms. Since all sets are themselves objects, the class of four-element groups is thus at least as large as the class of all sets, which by Russell’s paradox is known not to itself be a set (assuming the usual ZFC axioms of set theory).

A much better question is to ask how many groups of order four there are up to isomorphism, counting each isomorphism class of groups exactly once. Now, as one learns in undergraduate group theory classes, the answer is just “two”: the cyclic group {C_4} and the Klein four-group {C_2 \times C_2}.

More generally, given a class of objects {X} and some equivalence relation {\sim} on {X} (which one should interpret as describing the property of two objects in {X} being “isomorphic”), one can consider the number {|X / \sim|} of objects in {X} “up to isomorphism”, which is simply the cardinality of the collection {X/\sim} of equivalence classes {[x] := \{ y \in X: x \sim y \}} of {X}. In the case where {X} is finite, one can express this cardinality by the formula

\displaystyle |X/\sim| = \sum_{x \in X} \frac{1}{|[x]|}, \ \ \ \ \ (1)

thus one counts elements in {X}, weighted by the reciprocal of the number of objects they are isomorphic to.

Example 1 Let {X} be the five-element set {\{-2,-1,0,1,2\}} of integers between {-2} and {2}. Let us say that two elements {x, y} of {X} are isomorphic if they have the same magnitude: {x \sim y \iff |x| = |y|}. Then the quotient space {X/\sim} consists of just three equivalence classes: {\{-2,2\} = [2] = [-2]}, {\{-1,1\} = [-1] = [1]}, and {\{0\} = [0]}. Thus there are three objects in {X} up to isomorphism, and the identity (1) is then just

\displaystyle 3 = \frac{1}{2} + \frac{1}{2} + 1 + \frac{1}{2} + \frac{1}{2}.

Thus, to count elements in {X} up to equivalence, the elements {-2,-1,1,2} are given a weight of {1/2} because they are each isomorphic to two elements in {X}, while the element {0} is given a weight of {1} because it is isomorphic to just one element in {X} (namely, itself).
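Formula (1) for this example is easy to confirm mechanically (an illustrative check):

```python
from fractions import Fraction

X = [-2, -1, 0, 1, 2]
same = lambda x, y: abs(x) == abs(y)            # the equivalence relation
cls_size = lambda x: sum(1 for y in X if same(x, y))

# formula (1): each element weighted by the reciprocal of its class size
count = sum(Fraction(1, cls_size(x)) for x in X)
assert count == 3                               # three classes up to isomorphism
```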

Given a finite non-empty set {X}, there is also a natural probability distribution on {X}, namely the uniform distribution, according to which a random variable {\mathbf{x} \in X} is set equal to any given element {x} of {X} with probability {\frac{1}{|X|}}:

\displaystyle {\bf P}( \mathbf{x} = x ) = \frac{1}{|X|}.

Given a notion {\sim} of isomorphism on {X}, one can then define the random equivalence class {[\mathbf{x}] \in X/\sim} that the random element {\mathbf{x}} belongs to. But if the isomorphism classes are unequal in size, we now encounter a biasing effect: even if {\mathbf{x}} was drawn uniformly from {X}, the equivalence class {[\mathbf{x}]} need not be uniformly distributed in {X/\sim}. For instance, in the above example, if {\mathbf{x}} was drawn uniformly from {\{-2,-1,0,1,2\}}, then the equivalence class {[\mathbf{x}]} will not be uniformly distributed in the three-element space {X/\sim}, because it has a {2/5} probability of being equal to the class {\{-2,2\}} or to the class {\{-1,1\}}, and only a {1/5} probability of being equal to the class {\{0\}}.

However, it is possible to remove this bias by changing the weighting in (1), and thus changing the notion of what cardinality means. To do this, we generalise the previous situation. Instead of considering sets {X} with an equivalence relation {\sim} to capture the notion of isomorphism, we instead consider groupoids, which are sets {X} in which there are allowed to be multiple isomorphisms between elements in {X} (and in particular, there are allowed to be multiple automorphisms from an element to itself). More precisely:

Definition 2 A groupoid is a set (or proper class) {X}, together with a (possibly empty) collection {\mathrm{Iso}(x \rightarrow y)} of “isomorphisms” between any pair {x,y} of elements of {X}, and a composition map {f, g \mapsto g \circ f} from isomorphisms {f \in \mathrm{Iso}(x \rightarrow y)}, {g \in \mathrm{Iso}(y \rightarrow z)} to isomorphisms in {\mathrm{Iso}(x \rightarrow z)} for every {x,y,z \in X}, obeying the following group-like axioms:

  • (Identity) For every {x \in X}, there is an identity isomorphism {\mathrm{id}_x \in \mathrm{Iso}(x \rightarrow x)}, such that {f \circ \mathrm{id}_x = \mathrm{id}_y \circ f = f} for all {f \in \mathrm{Iso}(x \rightarrow y)} and {x,y \in X}.
  • (Associativity) If {f \in \mathrm{Iso}(x \rightarrow y)}, {g \in \mathrm{Iso}(y \rightarrow z)}, and {h \in \mathrm{Iso}(z \rightarrow w)} for some {x,y,z,w \in X}, then {h \circ (g \circ f) = (h \circ g) \circ f}.
  • (Inverse) If {f \in \mathrm{Iso}(x \rightarrow y)} for some {x,y \in X}, then there exists an inverse isomorphism {f^{-1} \in \mathrm{Iso}(y \rightarrow x)} such that {f \circ f^{-1} = \mathrm{id}_y} and {f^{-1} \circ f = \mathrm{id}_x}.

We say that two elements {x,y} of a groupoid are isomorphic, and write {x \sim y}, if there is at least one isomorphism from {x} to {y}.

Example 3 Any category gives a groupoid by taking {X} to be the set (or class) of objects, and {\mathrm{Iso}(x \rightarrow y)} to be the collection of invertible morphisms from {x} to {y}. For instance, in the category {\mathbf{Set}} of sets, {\mathrm{Iso}(x \rightarrow y)} would be the collection of bijections from {x} to {y}; in the category {\mathbf{Vec}/k} of linear vector spaces over some given base field {k}, {\mathrm{Iso}(x \rightarrow y)} would be the collection of invertible linear transformations from {x} to {y}; and so forth.

Every set {X} equipped with an equivalence relation {\sim} can be turned into a groupoid by assigning precisely one isomorphism {\iota_{x \rightarrow y}} from {x} to {y} for any pair {x,y \in X} with {x \sim y}, and no isomorphisms from {x} to {y} when {x \not \sim y}, with the groupoid operations of identity, composition, and inverse defined in the only way possible consistent with the axioms. We will call this the simply connected groupoid associated with this equivalence relation. For instance, with {X = \{-2,-1,0,1,2\}} as above, if we turn {X} into a simply connected groupoid, there will be precisely one isomorphism from {2} to {-2}, and also precisely one isomorphism from {2} to {2}, but no isomorphisms from {2} to {-1}, {0}, or {1}.

However, one can also form multiply-connected groupoids in which there can be multiple isomorphisms from one element of {X} to another. For instance, one can view {X = \{-2,-1,0,1,2\}} as a space that is acted on by multiplication by the two-element group {\{-1,+1\}}. This gives rise to two types of isomorphisms, an identity isomorphism {(+1)_x} from {x} to {x} for each {x \in X}, and a negation isomorphism {(-1)_x} from {x} to {-x} for each {x \in X}; in particular, there are two automorphisms of {0} (i.e., isomorphisms from {0} to itself), namely {(+1)_0} and {(-1)_0}, whereas the other four elements of {X} only have a single automorphism (the identity isomorphism). One defines composition, identity, and inverse in this groupoid in the obvious fashion (using the group law of the two-element group {\{-1,+1\}}); for instance, we have {(-1)_{-2} \circ (-1)_2 = (+1)_2}.

For a finite multiply-connected groupoid, it turns out that the natural notion of “cardinality” (or as I prefer to call it, “cardinality up to isomorphism”) is given by the variant

\displaystyle \sum_{x \in X} \frac{1}{|\{ f: f \in \mathrm{Iso}(x \rightarrow y) \hbox{ for some } y\}|}

of (1). That is to say, in the multiply connected case, the denominator is no longer the number of objects {y} isomorphic to {x}, but rather the number of isomorphisms from {x} to other objects {y}. Grouping together all summands coming from a single equivalence class {[x]} in {X/\sim}, we can also write this expression as

\displaystyle \sum_{[x] \in X/\sim} \frac{1}{|\mathrm{Aut}(x)|} \ \ \ \ \ (2)

where {\mathrm{Aut}(x) := \mathrm{Iso}(x \rightarrow x)} is the automorphism group of {x}, that is to say the group of isomorphisms from {x} to itself. (Note that if {x,x'} belong to the same equivalence class {[x]}, then the two groups {\mathrm{Aut}(x)} and {\mathrm{Aut}(x')} will be isomorphic and thus have the same cardinality, and so the above expression is well-defined.)

For instance, if we take {X} to be the simply connected groupoid on {\{-2,-1,0,1,2\}}, then the number of elements of {X} up to isomorphism is

\displaystyle \frac{1}{2} + \frac{1}{2} + 1 + \frac{1}{2} + \frac{1}{2} = 1 + 1 + 1 = 3

exactly as before. If however we take the multiply connected groupoid on {\{-2,-1,0,1,2\}}, in which {0} has two automorphisms, the number of elements of {X} up to isomorphism is now the smaller quantity

\displaystyle \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} = 1 + \frac{1}{2} + 1 = \frac{5}{2};

the equivalence class {[0]} is now counted with weight {1/2} rather than {1} due to the two automorphisms on {0}. Geometrically, one can think of this groupoid as being formed by taking the five-element set {\{-2,-1,0,1,2\}}, and “folding it in half” around the fixed point {0}, giving rise to two “full” quotient points {[1], [2]} and one “half” point {[0]}. More generally, given a finite group {G} acting on a finite set {X}, and forming the associated multiply connected groupoid, the cardinality up to isomorphism of this groupoid will be {|X|/|G|}, since each element {x} of {X} will have {|G|} isomorphisms on it (whether they be to the same element {x}, or to other elements of {X}).
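
As a quick sanity check, the two counts above can be reproduced by a short computation (an illustrative sketch; the encoding of each groupoid by its weight denominators is ours, not part of the text):

```python
# Computing "cardinality up to isomorphism" for the two groupoid structures
# on X = {-2,-1,0,1,2} discussed above.
from fractions import Fraction

X = [-2, -1, 0, 1, 2]

# Simply connected groupoid on the relation x ~ y iff |x| = |y|: the
# denominator for x is the number of objects isomorphic to x.
simply = sum(Fraction(1, len([y for y in X if abs(y) == abs(x)])) for x in X)

# Multiply connected groupoid from the {-1,+1} action: every x carries the
# two isomorphisms (+1)_x : x -> x and (-1)_x : x -> -x, so the denominator
# is always 2.
multiply = sum(Fraction(1, 2) for x in X)

print(simply)    # 3
print(multiply)  # 5/2
```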

The definition (2) can also make sense for some infinite groupoids; to my knowledge this was first explicitly done in this paper of Baez and Dolan. Consider for instance the category {\mathbf{FinSet}} of finite sets, with isomorphisms given by bijections as in Example 3. Every finite set is isomorphic to {\{1,\dots,n\}} for some natural number {n}, so the equivalence classes of {\mathbf{FinSet}} may be indexed by the natural numbers. The automorphism group {S_n} of {\{1,\dots,n\}} has order {n!}, so the cardinality of {\mathbf{FinSet}} up to isomorphism is

\displaystyle \sum_{n=0}^\infty \frac{1}{n!} = e.

(This fact is sometimes loosely stated as “the number of finite sets is {e}“, but I view this statement as somewhat misleading if the qualifier “up to isomorphism” is not added.) Similarly, when one allows for multiple isomorphisms from a group to itself, the number of groups of order four up to isomorphism is now

\displaystyle \frac{1}{2} + \frac{1}{6} = \frac{2}{3}

because the cyclic group {C_4} has two automorphisms, whereas the Klein four-group {C_2 \times C_2} has six.
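
The automorphism counts used here can be verified by brute force, counting the bijections of each group of order four that respect the group law (a small sketch; the concrete encodings of {C_4} and the Klein four-group are ours):

```python
# Count automorphisms of the two groups of order four by exhaustive search.
from fractions import Fraction
from itertools import permutations

def automorphisms(elements, op):
    """Count bijections f with f(op(a,b)) == op(f(a), f(b))."""
    count = 0
    for perm in permutations(elements):
        f = dict(zip(elements, perm))
        if all(f[op(a, b)] == op(f[a], f[b]) for a in elements for b in elements):
            count += 1
    return count

c4 = automorphisms(range(4), lambda a, b: (a + b) % 4)   # cyclic group C_4
klein = automorphisms(range(4), lambda a, b: a ^ b)      # C_2 x C_2, via XOR

print(c4, klein)                              # 2 6
print(Fraction(1, c4) + Fraction(1, klein))   # 2/3
```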

In the case that the cardinality of a groupoid {X} up to isomorphism is finite and non-zero, one can now define the notion of a random isomorphism class {[\mathbf{x}]} in {X/\sim} drawn “uniformly up to isomorphism”, by requiring the probability of attaining any given isomorphism class {[x]} to be

\displaystyle {\mathbf P}([\mathbf{x}] = [x]) = \frac{1 / |\mathrm{Aut}(x)|}{\sum_{[y] \in X/\sim} 1/|\mathrm{Aut}(y)|},

thus the probability of being isomorphic to a given element {x} will be inversely proportional to the number of automorphisms that {x} has. For instance, if we take {X} to be the set {\{-2,-1,0,1,2\}} with the simply connected groupoid, {[\mathbf{x}]} will be drawn uniformly from the three available equivalence classes {[0], [1], [2]}, with a {1/3} probability of attaining each; but if instead one uses the multiply connected groupoid coming from the action of {\{-1,+1\}}, and draws {[\mathbf{x}]} uniformly up to isomorphism, then {[1]} and {[2]} will now be selected with probability {2/5} each, and {[0]} will be selected with probability {1/5}. Thus this distribution has accounted for the bias mentioned previously: if a finite group {G} acts on a finite space {X}, and {\mathbf{x}} is drawn uniformly from {X}, then {[\mathbf{x}]} will still be drawn uniformly up to isomorphism from {X/G}, if we use the multiply connected groupoid coming from the {G} action, rather than the simply connected groupoid coming from just the {G}-orbit structure on {X}.

Using the groupoid of finite sets, we see that a finite set chosen uniformly up to isomorphism will have a cardinality that is distributed according to the Poisson distribution of parameter {1}, that is to say it will be of cardinality {n} with probability {\frac{e^{-1}}{n!}}.
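
This Poisson behaviour can be checked exactly rather than by simulation: the number of permutations of {n} elements with exactly {k} fixed points is {\binom{n}{k} D_{n-k}}, where {D_m} is the number of derangements of {m} elements. A short sketch (the choice {n=12} is arbitrary):

```python
# Exact fixed-point distribution of a random permutation vs. Poisson(1).
from fractions import Fraction
from math import comb, factorial, exp

def derangements(m):
    # D(0) = 1, D(1) = 0, D(m) = (m-1) * (D(m-1) + D(m-2))
    d = [1, 0]
    for i in range(2, m + 1):
        d.append((i - 1) * (d[i - 1] + d[i - 2]))
    return d[m]

n = 12
for k in range(4):
    exact = Fraction(comb(n, k) * derangements(n - k), factorial(n))
    poisson = exp(-1) / factorial(k)
    print(k, float(exact), poisson)   # the two columns agree to ~7 digits
```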

One important source of groupoids is the fundamental groupoids {\pi_1(M)} of a manifold {M} (one can also consider more general topological spaces than manifolds, but for simplicity we will restrict this discussion to the manifold case), in which the underlying space is simply {M}, and the isomorphisms from {x} to {y} are the equivalence classes of paths from {x} to {y} up to homotopy; in particular, the automorphism group of any given point is just the fundamental group of {M} at that base point. The equivalence class {[x]} of a point in {M} is then the connected component of {x} in {M}. The cardinality up to isomorphism of the fundamental groupoid is then

\displaystyle \sum_{M' \in \pi_0(M)} \frac{1}{|\pi_1(M')|}

where {\pi_0(M)} is the collection of connected components {M'} of {M}, and {|\pi_1(M')|} is the order of the fundamental group of {M'}. Thus, simply connected components of {M} count for a full unit of cardinality, whereas multiply connected components (which can be viewed as quotients of their simply connected cover by their fundamental group) will count for a fractional unit of cardinality, inversely to the order of their fundamental group.

This notion of cardinality up to isomorphism of a groupoid behaves well with respect to various basic notions. For instance, suppose one has an {n}-fold covering map {\pi: X \rightarrow Y} of one finite groupoid {Y} by another {X}. This means that {\pi} is a functor that is surjective, with all preimages of cardinality {n}, with the property that given any pair {y,y'} in the base space {Y} and any {x} in the preimage {\pi^{-1}(\{y\})} of {y}, every isomorphism {f \in \mathrm{Iso}(y \rightarrow y')} has a unique lift {\tilde f \in \mathrm{Iso}(x \rightarrow x')} from the given initial point {x} (and some {x'} in the preimage of {y'}). Then one can check that the cardinality up to isomorphism of {X} is {n} times the cardinality up to isomorphism of {Y}, which fits well with the geometric picture of {X} as the {n}-fold cover of {Y}. (For instance, if one covers a manifold {M} with finite fundamental group by its universal cover, this is a {|\pi_1(M)|}-fold cover, the base has cardinality {1/|\pi_1(M)|} up to isomorphism, and the universal cover has cardinality one up to isomorphism.) Related to this, if one draws an equivalence class {[\mathbf{x}]} of {X} uniformly up to isomorphism, then {\pi([\mathbf{x}])} will be an equivalence class of {Y} drawn uniformly up to isomorphism also.

Indeed, one can show that this notion of cardinality up to isomorphism for groupoids is uniquely determined by a small number of axioms such as these (similar to the axioms that determine Euler characteristic); see this blog post of Qiaochu Yuan for details.

The probability distributions on isomorphism classes described by the above recipe seem to arise naturally in many applications. For instance, if one draws a profinite abelian group up to isomorphism at random in this fashion (so that each isomorphism class {[G]} of a profinite abelian group {G} occurs with probability inversely proportional to the number of automorphisms of this group), then the resulting distribution is known as the Cohen-Lenstra distribution, and seems to emerge as the natural asymptotic distribution of many randomly generated profinite abelian groups in number theory and combinatorics, such as the class groups of random quadratic fields; see this previous blog post for more discussion. For a simple combinatorial example, the set of fixed points of a random permutation on {n} elements will have a cardinality that converges in distribution to the Poisson distribution of rate {1} (as discussed in this previous post), thus we see that the fixed points of a large random permutation asymptotically are distributed uniformly up to isomorphism. I’ve been told that this notion of cardinality up to isomorphism is also particularly compatible with stacks (which are a good framework to describe such objects as moduli spaces of algebraic varieties up to isomorphism), though I am not sufficiently acquainted with this theory to say much more than this.

Suppose that {A \subset B} are two subgroups of some ambient group {G}, with the index {K := [B:A]} of {A} in {B} being finite. Then {B} is the union of {K} left cosets of {A}, thus {B = SA} for some set {S \subset B} of cardinality {K}. The elements {s} of {S} are not entirely arbitrary with regard to {A}. For instance, if {A} is a normal subgroup of {B}, then for each {s \in S}, the conjugation map {g \mapsto s^{-1} g s} preserves {A}. In particular, if we write {A^s := s^{-1} A s} for the conjugate of {A} by {s}, then

\displaystyle  A = A^s.

Even if {A} is not normal in {B}, it turns out that the conjugation map {g \mapsto s^{-1} g s} approximately preserves {A}, if {K} is bounded. To quantify this, let us call two subgroups {A,B} {K}-commensurate for some {K \geq 1} if one has

\displaystyle  [A : A \cap B], [B : A \cap B] \leq K.

Proposition 1 Let {A \subset B} be groups, with finite index {K = [B:A]}. Then for every {s \in B}, the groups {A} and {A^s} are {K}-commensurate, in fact

\displaystyle  [A : A \cap A^s ] = [A^s : A \cap A^s ] \leq K.

Proof: One can partition {B} into {K} left translates {xA} of {A}, as well as {K} left translates {yA^s} of {A^s}. Combining the partitions, we see that {B} can be partitioned into at most {K^2} non-empty sets of the form {xA \cap yA^s}. Each of these sets is easily seen to be a left translate of the subgroup {A \cap A^s}, thus {[B: A \cap A^s] \leq K^2}. Since

\displaystyle [B: A \cap A^s] = [B:A] [A: A \cap A^s] = [B:A^s] [A^s: A \cap A^s]

and {[B:A] = [B:A^s]=K}, we obtain the claim. \Box
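
One can verify this proposition in a small concrete case, say inside the symmetric group {S_3} with {A} the two-element subgroup generated by a transposition (a toy sketch; the permutation encoding is ours):

```python
# Check Proposition 1 in B = S_3 with A = {identity, (0 1)}, index K = 3.
from itertools import permutations

def compose(p, q):          # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

B = set(permutations(range(3)))      # all of S_3
A = {(0, 1, 2), (1, 0, 2)}           # {identity, (0 1)}
K = len(B) // len(A)                 # index [B:A] = 3

for s in B:
    As = {compose(inverse(s), compose(a, s)) for a in A}   # conjugate A^s
    index = len(A) // len(A & As)    # [A : A cap A^s]
    assert index <= K, (s, index)
print("Proposition 1 verified for S_3; K =", K)
```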

One can replace the inclusion {A \subset B} by commensurability, at the cost of some worsening of the constants:

Corollary 2 Let {A, B} be {K}-commensurate subgroups of {G}. Then for every {s \in B}, the groups {A} and {A^s} are {K^2}-commensurate.

Proof: Applying the previous proposition with {A} replaced by {A \cap B}, we see that for every {s \in B}, {A \cap B} and {(A \cap B)^s} are {K}-commensurate. Since {A \cap B} and {(A \cap B)^s} have index at most {K} in {A} and {A^s} respectively, the claim follows. \Box

It turns out that a similar phenomenon holds for the more general concept of an approximate group, and gives a “classification” of all the approximate groups {B} containing a given approximate group {A} as a “bounded index approximate subgroup”. Recall that a {K}-approximate group {A} in a group {G} for some {K \geq 1} is a symmetric subset of {G} containing the identity, such that the product set {A^2 := \{ a_1 a_2: a_1,a_2 \in A\}} can be covered by at most {K} left translates of {A} (and thus also {K} right translates, by the symmetry of {A}). For simplicity we will restrict attention to finite approximate groups {A} so that we can use their cardinality {|A|} as a measure of size. We call two finite approximate groups {A,B} {K}-commensurate if one has

\displaystyle  |A^2 \cap B^2| \geq \frac{1}{K} |A|, \frac{1}{K} |B|;

note that this is consistent with the previous notion of commensurability for genuine groups.

Theorem 3 Let {G} be a group, and let {K_1,K_2,K_3 \geq 1} be real numbers. Let {A} be a finite {K_1}-approximate group, and let {B} be a symmetric subset of {G} that contains {A}.

  • (i) If {B} is a {K_2}-approximate group with {|B| \leq K_3 |A|}, then one has {B \subset SA} for some set {S} of cardinality at most {K_1 K_2 K_3}. Furthermore, for each {s \in S}, the approximate groups {A} and {A^s} are {K_1 K_2^5 K_3}-commensurate.
  • (ii) Conversely, if {B \subset SA} for some set {S} of cardinality at most {K_3}, and {A} and {A^s} are {K_2}-commensurate for all {s \in S}, then {|B| \leq K_3 |A|}, and {B} is a {K_1^6 K_2 K_3^2}-approximate group.

Informally, the assertion that {B} is an approximate group containing {A} as a “bounded index approximate subgroup” is equivalent to {B} being covered by a bounded number of shifts {sA} of {A}, where {s} approximately normalises {A^2} in the sense that {A^2} and {(A^2)^s} have large intersection. Thus, to classify all such {B}, the problem essentially reduces to that of classifying those {s} that approximately normalise {A^2}.

To prove the theorem, we recall some standard lemmas from arithmetic combinatorics, which are the foundation stones of the “Ruzsa calculus” that we will use to establish our results:

Lemma 4 (Ruzsa covering lemma) If {A} and {B} are finite non-empty subsets of {G}, then one has {B \subset SAA^{-1}} for some set {S \subset B} with cardinality {|S| \leq \frac{|BA|}{|A|}}.

Proof: We take {S} to be a subset of {B} with the property that the translates {sA, s \in S} are disjoint in {BA}, and such that {S} is maximal with respect to set inclusion. The required properties of {S} are then easily verified. \Box
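
The greedy construction in this proof is easy to run explicitly. The following sketch works additively in the cyclic group {{\bf Z}/101{\bf Z}} (so {AA^{-1}} becomes the difference set {A-A}); the specific sets {A,B} are random choices of ours:

```python
# Ruzsa covering lemma, constructively, in Z/101: pick s in B greedily so
# that the translates s+A are pairwise disjoint; maximality then forces
# B to be covered by S + (A - A), with |S| * |A| <= |B + A|.
import random

n = 101
random.seed(0)
A = set(random.sample(range(n), 15))
B = set(random.sample(range(n), 40))

def sumset(X, Y):
    return {(x + y) % n for x in X for y in Y}

S, used = [], set()
for s in B:
    trans = {(s + a) % n for a in A}
    if not (trans & used):        # s+A disjoint from earlier translates
        S.append(s)
        used |= trans

AmA = sumset(A, {(-a) % n for a in A})                  # A - A
cover = {(s + d) % n for s in S for d in AmA}
assert B <= cover                                       # B covered by S + (A-A)
assert len(S) * len(A) <= len(sumset(B, A))             # |S| <= |B+A|/|A|
print(len(S), "translates used")
```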

Lemma 5 (Ruzsa triangle inequality) If {A,B,C} are finite non-empty subsets of {G}, then

\displaystyle  |A C^{-1}| \leq |A B^{-1}| |B C^{-1}| / |B|.

Proof: If {ac^{-1}} is an element of {AC^{-1}} with {a \in A} and {c \in C}, then from the identity {ac^{-1} = (ab^{-1}) (bc^{-1})} we see that {ac^{-1}} can be written as the product of an element of {AB^{-1}} and an element of {BC^{-1}} in at least {|B|} distinct ways. The claim follows. \Box
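
A numerical sanity check of the triangle inequality (a sketch; we again work additively, in {{\bf Z}/97{\bf Z}}, so {AC^{-1}} becomes the difference set {A-C}):

```python
# Ruzsa triangle inequality |A-C| <= |A-B| |B-C| / |B| on random triples.
import random

n = 97
random.seed(1)

def diffset(X, Y):
    return {(x - y) % n for x in X for y in Y}

for _ in range(100):
    A = set(random.sample(range(n), random.randint(1, 30)))
    B = set(random.sample(range(n), random.randint(1, 30)))
    C = set(random.sample(range(n), random.randint(1, 30)))
    assert len(diffset(A, C)) * len(B) <= len(diffset(A, B)) * len(diffset(B, C))
print("Ruzsa triangle inequality verified on 100 random triples")
```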

Now we can prove (i). By the Ruzsa covering lemma, {B} can be covered by at most

\displaystyle \frac{|BA|}{|A|} \leq \frac{|B^2|}{|A|} \leq \frac{K_2 |B|}{|A|} \leq K_2 K_3

left-translates of {A^2}, and hence by at most {K_1 K_2 K_3} left-translates of {A}, thus {B \subset SA} for some {|S| \leq K_1 K_2 K_3}. Since {sA} only intersects {B} if {s \in BA}, we may assume that

\displaystyle  S \subset BA \subset B^2

and hence for any {s \in S}

\displaystyle  |A^s A| \leq |B^2 A B^2 A| \leq |B^6|

\displaystyle  \leq K_2^5 |B| \leq K_2^5 K_3 |A|.

By the Ruzsa covering lemma again, this implies that {A^s} can be covered by at most {K_2^5 K_3} left-translates of {A^2}, and hence by at most {K_1 K_2^5 K_3} left-translates of {A}. By the pigeonhole principle, there thus exists a group element {g} with

\displaystyle  |A^s \cap gA| \geq \frac{1}{K_1 K_2^5 K_3} |A|.

Since

\displaystyle  |A^s \cap gA| \leq | (A^s \cap gA)^{-1} (A^s \cap gA)|

and

\displaystyle  (A^s \cap gA)^{-1} (A^s \cap gA) \subset A^2 \cap (A^s)^2

the claim follows.

Now we prove (ii). Clearly

\displaystyle  |B| \leq |S| |A| \leq K_3 |A|.

Now we control the size of {B^2 A}. We have

\displaystyle  |B^2 A| \leq |SA SA^2| \leq K_3^2 \sup_{s \in S} |A s A^2| = K_3^2 \sup_{s \in S} |A^s A^2|.

From the Ruzsa triangle inequality and symmetry we have

\displaystyle  |A^s A^2| \leq \frac{ |A^s (A^2 \cap (A^2)^s)| |(A^2 \cap (A^2)^s) A^2|}{|A^2 \cap (A^2)^s|}

\displaystyle  \leq \frac{ |(A^3)^s| |A^4| }{|A|/K_2}

\displaystyle  \leq K_2 \frac{ |A^3| |A^4| }{|A|}

\displaystyle  \leq K_1^5 K_2 |A|

and thus

\displaystyle  |B^2 A| \leq K_1^5 K_2 K_3^2 |A|.

By the Ruzsa covering lemma, this implies that {B^2} is covered by at most {K_1^5 K_2 K_3^2} left-translates of {A^2}, hence by at most {K_1^6 K_2 K_3^2} left-translates of {A}. Since {A \subset B}, the claim follows.

We now establish some auxiliary propositions about commensurability of approximate groups. The first claim is that commensurability is approximately transitive:

Proposition 6 Let {A} be a {K_1}-approximate group, {B} be a {K_2}-approximate group, and {C} be a {K_3}-approximate group. If {A} and {B} are {K_4}-commensurate, and {B} and {C} are {K_5}-commensurate, then {A} and {C} are {K_1^2 K_2^3 K_3^2 K_4 K_5 \max(K_1,K_3)}-commensurate.

Proof: From two applications of the Ruzsa triangle inequality we have

\displaystyle  |AC| \leq \frac{|A (A^2 \cap B^2)| |(A^2 \cap B^2) (B^2 \cap C^2)| |(B^2 \cap C^2) C|}{|A^2 \cap B^2| |B^2 \cap C^2|}

\displaystyle  \leq \frac{|A^3| |B^4| |C^3|}{ (|A|/K_4) (|B|/K_5)}

\displaystyle  \leq K_4 K_5 \frac{K_1^2 |A| K_2^3 |B| K_3^2 |C|}{ |A| |B| }

\displaystyle  = K_1^2 K_2^3 K_3^2 K_4 K_5 |C|.

By the Ruzsa covering lemma, we may thus cover {A} by at most {K_1^2 K_2^3 K_3^2 K_4 K_5} left-translates of {C^2}, and hence by {K_1^2 K_2^3 K_3^3 K_4 K_5} left-translates of {C}. By the pigeonhole principle, there thus exists a group element {g} such that

\displaystyle  |A \cap gC| \geq \frac{1}{K_1^2 K_2^3 K_3^3 K_4 K_5} |A|,

and so by arguing as in the proof of part (i) of the theorem we have

\displaystyle  |A^2 \cap C^2| \geq \frac{1}{K_1^2 K_2^3 K_3^3 K_4 K_5} |A|

and similarly

\displaystyle  |A^2 \cap C^2| \geq \frac{1}{K_1^3 K_2^3 K_3^2 K_4 K_5} |C|

and the claim follows. \Box

The next proposition asserts that the union and (modified) intersection of two commensurate approximate groups is again an approximate group:

Proposition 7 Let {A} be a {K_1}-approximate group, {B} be a {K_2}-approximate group, and suppose that {A} and {B} are {K_3}-commensurate. Then {A \cup B} is a {K_1 + K_2 + K_1^2 K_2^4 K_3 + K_1^4 K_2^2 K_3}-approximate subgroup, and {A^2 \cap B^2} is a {K_1^6 K_2^3 K_3}-approximate subgroup.

Using this proposition, one may obtain a variant of the previous theorem where the containment {A \subset B} is replaced by commensurability; we leave the details to the interested reader.

Proof: We begin with {A \cup B}. Clearly {A \cup B} is symmetric and contains the identity. We have {(A \cup B)^2 = A^2 \cup AB \cup BA \cup B^2}. The set {A^2} is already covered by {K_1} left translates of {A}, and hence of {A \cup B}; similarly {B^2} is covered by {K_2} left translates of {A \cup B}. As for {AB}, we observe from the Ruzsa triangle inequality that

\displaystyle  |AB^2| \leq \frac{|A (A^2 \cap B^2)| |(A^2 \cap B^2) B^2|}{|A^2 \cap B^2|}

\displaystyle  \leq \frac{|A^3| |B^4|}{|A|/K_3}

\displaystyle  \leq K_1^2 K_2^3 K_3 |B|

and hence by the Ruzsa covering lemma, {AB} is covered by at most {K_1^2 K_2^3 K_3} left translates of {B^2}, and hence by {K_1^2 K_2^4 K_3} left translates of {B}, and hence of {A \cup B}. Similarly {BA} is covered by at most {K_1^4 K_2^2 K_3} left translates of {B}. The claim follows.

Now we consider {A^2 \cap B^2}. Again, this is clearly symmetric and contains the identity. Repeating the previous arguments, we see that {A} is covered by at most {K_1^2 K_2^3 K_3} left-translates of {B}, and hence there exists a group element {g} with

\displaystyle  |A \cap gB| \geq \frac{1}{K_1^2 K_2^3 K_3} |A|.

Now observe that

\displaystyle  |(A^2 \cap B^2)^2 (A \cap gB)| \leq |A^5| \leq K_1^4 |A|

and so by the Ruzsa covering lemma, {(A^2 \cap B^2)^2} can be covered by at most {K_1^6 K_2^3 K_3} left-translates of {(A \cap gB) (A \cap gB)^{-1}}. But this latter set is (as observed previously) contained in {A^2 \cap B^2}, and the claim follows. \Box
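
For a toy instance of Proposition 7, one can take two genuine commensurate subgroups, say {A = 4{\bf Z}/36{\bf Z}} and {B = 6{\bf Z}/36{\bf Z}} (our choice of example, not from the text), and check directly that {A \cup B} behaves like an approximate group, with far fewer translates needed than the worst-case bound:

```python
# A u B for the 3-commensurate subgroups A = 4Z/36, B = 6Z/36: cover
# (A u B)^2 greedily by translates s + (A u B) and count how many we need.
n = 36
A = {x for x in range(n) if x % 4 == 0}
B = {x for x in range(n) if x % 6 == 0}
U = A | B
U2 = {(x + y) % n for x in U for y in U}

uncovered, S = set(U2), []
while uncovered:
    # pick the translate covering the most remaining elements
    s = max(range(n), key=lambda t: len(uncovered & {(t + u) % n for u in U}))
    S.append(s)
    uncovered -= {(s + u) % n for u in U}
print(len(S), "translates of A u B cover (A u B)^2")
```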

Because of Euler’s identity {e^{\pi i} + 1 = 0}, the complex exponential is not injective: {e^{z + 2\pi i k} = e^z} for any complex {z} and integer {k}. As such, the complex logarithm {z \mapsto \log z} is not well-defined as a single-valued function from {{\bf C} \backslash \{0\}} to {{\bf C}}. However, after making a branch cut, one can create a branch of the logarithm which is single-valued. For instance, after removing the negative real axis {(-\infty,0]}, one has the standard branch {\hbox{Log}: {\bf C} \backslash (-\infty,0] \rightarrow \{ z \in {\bf C}: |\hbox{Im} z| < \pi \}} of the logarithm, with {\hbox{Log}(z)} defined as the unique choice of the complex logarithm of {z} whose imaginary part has magnitude strictly less than {\pi}. This particular branch has a number of useful additional properties:

  • The standard branch {\hbox{Log}} is holomorphic on its domain {{\bf C} \backslash (-\infty,0]}.
  • One has {\hbox{Log}( \overline{z} ) = \overline{ \hbox{Log}(z) }} for all {z} in the domain {{\bf C} \backslash (-\infty,0]}. In particular, if {z \in {\bf C} \backslash (-\infty,0]} is real, then {\hbox{Log} z} is real.
  • One has {\hbox{Log}( z^{-1} ) = - \hbox{Log}(z)} for all {z} in the domain {{\bf C} \backslash (-\infty,0]}.

One can then also use the standard branch of the logarithm to create standard branches of other multi-valued functions, for instance creating a standard branch {z \mapsto \exp( \frac{1}{2} \hbox{Log} z )} of the square root function. We caution however that the identity {\hbox{Log}(zw) = \hbox{Log}(z) + \hbox{Log}(w)} can fail for the standard branch (or indeed for any branch of the logarithm).
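
A concrete instance of this failure, using Python's `cmath.log` (which implements the standard branch); the choice of test point {z = e^{3\pi i/4}} is ours:

```python
# Log(zw) can differ from Log(z) + Log(w) by a multiple of 2*pi*i.
import cmath

z = cmath.exp(3j * cmath.pi / 4)   # argument 3*pi/4
# z*z has argument 3*pi/2, which the standard branch records as -pi/2:
lhs = cmath.log(z * z)
rhs = 2 * cmath.log(z)
print(lhs.imag, rhs.imag)          # roughly -1.5708 vs 4.7124
assert abs((rhs - lhs) - 2j * cmath.pi) < 1e-9   # they differ by 2*pi*i
```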

One can extend this standard branch of the logarithm to {n \times n} complex matrices, or (equivalently) to linear transformations {T: V \rightarrow V} on an {n}-dimensional complex vector space {V}, provided that the spectrum of that matrix or transformation avoids the branch cut {(-\infty,0]}. Indeed, from the spectral theorem one can decompose any such {T: V \rightarrow V} as the direct sum of operators {T_\lambda: V_\lambda \rightarrow V_\lambda} on the non-trivial generalised eigenspaces {V_\lambda} of {T}, where {\lambda \in {\bf C} \backslash (-\infty,0]} ranges in the spectrum of {T}. For each component {T_\lambda} of {T}, we define

\displaystyle  \hbox{Log}( T_\lambda ) = P_\lambda( T_\lambda )

where {P_\lambda} is the Taylor expansion of {\hbox{Log}} at {\lambda}; as {T_\lambda-\lambda} is nilpotent, only finitely many terms in this Taylor expansion are required. The logarithm {\hbox{Log} T} is then defined as the direct sum of the {\hbox{Log} T_\lambda}.
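
A tiny worked instance of this construction (a sketch, with our own choice of {\lambda = 2}): for the Jordan block {T = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}}, the Taylor expansion of {\hbox{Log}} at {\lambda} terminates after the linear term, giving {\hbox{Log} T = \begin{pmatrix} \log \lambda & 1/\lambda \\ 0 & \log \lambda \end{pmatrix}}, and exponentiating recovers {T}:

```python
# Log of a 2x2 Jordan block via the (terminating) Taylor expansion, then
# verify exp(Log T) = T with a truncated exponential series.
import math

lam = 2.0
T = [[lam, 1.0], [0.0, lam]]

# P_lam(T) = log(lam) I + (T - lam)/lam; higher terms vanish as (T-lam)^2 = 0.
LogT = [[math.log(lam), 1.0 / lam], [0.0, math.log(lam)]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(X, terms=30):
    R = [[1.0, 0.0], [0.0, 1.0]]   # running sum of X^m / m!
    P = [[1.0, 0.0], [0.0, 1.0]]   # current term X^m / m!
    for m in range(1, terms):
        P = mat_mul(P, X)
        P = [[P[i][j] / m for j in range(2)] for i in range(2)]
        R = [[R[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return R

E = mat_exp(LogT)
assert all(abs(E[i][j] - T[i][j]) < 1e-9 for i in range(2) for j in range(2))
print(E)   # approximately [[2, 1], [0, 2]]
```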

The matrix standard branch of the logarithm has many pleasant and easily verified properties (often inherited from their scalar counterparts), whenever {T: V \rightarrow V} has no spectrum in {(-\infty,0]}:

  • (i) We have {\exp( \hbox{Log} T ) = T}.
  • (ii) If {T_1: V_1 \rightarrow V_1} and {T_2: V_2 \rightarrow V_2} have no spectrum in {(-\infty,0]}, then {\hbox{Log}( T_1 \oplus T_2 ) = \hbox{Log}(T_1) \oplus \hbox{Log}(T_2)}.
  • (iii) If {T} has spectrum in a closed disk {B(z,r)} in {{\bf C} \backslash (-\infty,0]}, then {\hbox{Log}(T) = P_z(T)}, where {P_z} is the Taylor series of {\hbox{Log}} around {z} (which is absolutely convergent in {B(z,r)}).
  • (iv) {\hbox{Log}(T)} depends holomorphically on {T}. (Easily established from (ii), (iii), after covering the spectrum of {T} by disjoint disks; alternatively, one can use the Cauchy integral representation {\hbox{Log}(T) = \frac{1}{2\pi i} \int_\gamma \hbox{Log}(z)(z-T)^{-1}\ dz} for a contour {\gamma} in the domain enclosing the spectrum of {T}.) In particular, the standard branch of the matrix logarithm is smooth.
  • (v) If {S: V \rightarrow W} is any invertible linear or antilinear map, then {\hbox{Log}( STS^{-1} ) = S \hbox{Log}(T) S^{-1}}. In particular, the standard branch of the logarithm commutes with matrix conjugations; and if {T} is real with respect to a complex conjugation operation on {V} (that is to say, an antilinear involution), then {\hbox{Log}(T)} is real also.
  • (vi) If {T^*: V^* \rightarrow V^*} denotes the transpose of {T} (with {V^*} the complex dual of {V}), then {\hbox{Log}(T^*) = \hbox{Log}(T)^*}. Similarly, if {T^\dagger: V^\dagger \rightarrow V^\dagger} denotes the adjoint of {T} (with {V^\dagger} the complex conjugate of {V^*}, i.e. {V^*} with the conjugated multiplication map {(c,z) \mapsto \overline{c} z}), then {\hbox{Log}(T^\dagger) = \hbox{Log}(T)^\dagger}.
  • (vii) One has {\hbox{Log}(T^{-1}) = - \hbox{Log}( T )}.
  • (viii) If {\sigma(T)} denotes the spectrum of {T}, then {\sigma(\hbox{Log} T) = \hbox{Log} \sigma(T)}.

As a quick application of the standard branch of the matrix logarithm, we have

Proposition 1 Let {G} be one of the following matrix groups: {GL_n({\bf C})}, {GL_n({\bf R})}, {U_n({\bf C})}, {O(Q)}, {Sp_{2n}({\bf C})}, or {Sp_{2n}({\bf R})}, where {Q: {\bf R}^n \rightarrow {\bf R}} is a non-degenerate real quadratic form (so {O(Q)} is isomorphic to a (possibly indefinite) orthogonal group {O(k,n-k)} for some {0 \leq k \leq n}). Then any element {T} of {G} whose spectrum avoids {(-\infty,0]} is exponential, that is to say {T = \exp(X)} for some {X} in the Lie algebra {{\mathfrak g}} of {G}.

Proof: We just prove this for {G=O(Q)}, as the other cases are similar (or a bit simpler). If {T \in O(Q)}, then (viewing {T} as a complex-linear map on {{\bf C}^n}, and using the complex bilinear form associated to {Q} to identify {{\bf C}^n} with its complex dual {({\bf C}^n)^*}) {T} is real and {T^{*-1} = T}. By the properties (v), (vi), (vii) of the standard branch of the matrix logarithm, we conclude that {\hbox{Log} T} is real and {- \hbox{Log}(T)^* = \hbox{Log}(T)}, and so {\hbox{Log}(T)} lies in the Lie algebra {{\mathfrak g} = {\mathfrak o}(Q)}, and the claim now follows from (i). \Box

Exercise 2 Show that {\hbox{diag}(-\lambda, -1/\lambda)} is not exponential in {GL_2({\bf R})} if {-\lambda \in (-\infty,0) \backslash \{-1\}}. Thus we see that the branch cut in the above proposition is largely necessary. See this paper of Djokovic for a more complete description of the image of the exponential map in classical groups, as well as this previous blog post for some more discussion of the surjectivity (or lack thereof) of the exponential map in Lie groups.

For a slightly less quick application of the standard branch, we have the following result (recently worked out in the answers to this MathOverflow question):

Proposition 3 Let {T} be an element of the split orthogonal group {O(n,n)} which lies in the connected component of the identity. Then {\hbox{det}(1+T) \geq 0}.

The requirement that {T} lie in the identity component is necessary, as the counterexample {T = \hbox{diag}(-\lambda, -1/\lambda )} for {\lambda \in (-\infty,-1) \cup (-1,0)} shows.

Proof: We think of {T} as a (real) linear transformation on {{\bf C}^{2n}}, and write {Q} for the quadratic form associated to {O(n,n)}, so that {O(n,n) \equiv O(Q)}. We can split {{\bf C}^{2n} = V_1 \oplus V_2}, where {V_1} is the sum of all the generalised eigenspaces corresponding to eigenvalues in {(-\infty,0]}, and {V_2} is the sum of all the remaining eigenspaces. Since {T} and {(-\infty,0]} are real, {V_1,V_2} are real (i.e. complex-conjugation invariant) also. For {i=1,2}, the restriction {T_i: V_i \rightarrow V_i} of {T} to {V_i} then lies in {O(Q_i)}, where {Q_i} is the restriction of {Q} to {V_i}, and

\displaystyle  \hbox{det}(1+T) = \hbox{det}(1+T_1) \hbox{det}(1+T_2).

The spectrum of {T_2} consists of positive reals, as well as complex pairs {\lambda, \overline{\lambda}} (with equal multiplicity), so {\hbox{det}(1+T_2) > 0}. From the preceding proposition we have {T_2 = \exp( X_2 )} for some {X_2 \in {\mathfrak o}(Q_2)}; this will be important later.

It remains to show that {\hbox{det}(1+T_1) \geq 0}. If {T_1} has spectrum at {-1} then we are done, so we may assume that {T_1} has spectrum only at {(-\infty,-1) \cup (-1,0)} (being invertible, {T} has no spectrum at {0}). We split {V_1 = V_3 \oplus V_4}, where {V_3,V_4} correspond to the portions of the spectrum in {(-\infty,-1)}, {(-1,0)}; these are real, {T}-invariant spaces. We observe that if {V_\lambda, V_\mu} are generalised eigenspaces of {T} with {\lambda \mu \neq 1}, then {V_\lambda, V_\mu} are orthogonal with respect to the (complex-bilinear) inner product {\cdot} associated with {Q}; this is easiest to see first for the actual eigenspaces (since { \lambda \mu u \cdot v = Tu \cdot Tv = u \cdot v} for all {u \in V_\lambda, v \in V_\mu}), and the extension to generalised eigenvectors then follows from a routine induction. From this we see that {V_1} is orthogonal to {V_2}, and {V_3} and {V_4} are null spaces, which by the non-degeneracy of {Q} (and hence of the restriction {Q_1} of {Q} to {V_1}) forces {V_3} to have the same dimension as {V_4}, indeed {Q} now gives an identification of {V_3^*} with {V_4}. If we let {T_3, T_4} be the restrictions of {T} to {V_3,V_4}, we thus identify {T_4} with {T_3^{*-1}}, since {T} lies in {O(Q)}; in particular {T_3} is invertible. Thus

\displaystyle  \hbox{det}(1+T_1) = \hbox{det}(1 + T_3) \hbox{det}( 1 + T_3^{*-1} ) = \hbox{det}(T_3)^{-1} \hbox{det}(1+T_3)^2

and so it suffices to show that {\hbox{det}(T_3) > 0}.

At this point we need to use the hypothesis that {T} lies in the identity component of {O(n,n)}. This implies (by a continuity argument) that the restriction of {T} to any maximal-dimensional positive subspace has positive determinant (since such a restriction cannot be singular, as this would mean that {T} would map a positive norm vector to a non-positive norm vector). Now, as {V_3,V_4} have equal dimension, {Q_1} has a balanced signature, so {Q_2} does also. Since {T_2 = \exp(X_2)}, {T_2} already lies in the identity component of {O(Q_2)}, and so has positive determinant on any maximal-dimensional positive subspace of {V_2}. We conclude that {T_1} has positive determinant on any maximal-dimensional positive subspace of {V_1}.

We choose a complex basis of {V_3}, to identify {V_3} with {V_3^*}, which has already been identified with {V_4}. (In coordinates, {V_3,V_4} are now both of the form {{\bf C}^m}, and {Q( v \oplus w ) = v \cdot w} for {v,w \in {\bf C}^m}.) Then {\{ v \oplus v: v \in V_3 \}} becomes a maximal positive subspace of {V_1}, and the restriction of {T_1} to this subspace is conjugate to {T_3 + T_3^{*-1}}, so that

\displaystyle  \hbox{det}( T_3 + T_3^{*-1} ) > 0.

Since {\hbox{det}( T_3 + T_3^{*-1} ) = \hbox{det}(T_3) \hbox{det}( 1 + T_3^{-1} T_3^{*-1} )} and { 1 + T_3^{-1} T_3^{*-1}} is positive definite, we conclude that {\hbox{det}(T_3)>0} as required. \Box
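
One can see the proposition (and the necessity of the identity-component hypothesis) numerically in the smallest case {O(1,1)}, using split coordinates in which {Q(x,y) = xy} (our choice of presentation): the identity component consists of the maps {\hbox{diag}(e^t, e^{-t})}, while {\hbox{diag}(-\lambda,-1/\lambda)} lies in another component.

```python
# det(1+T) for elements of O(1,1) with Q(x,y) = x*y.
import math

# identity component: T_t = diag(e^t, e^{-t}); det(1+T_t) = 2 + 2 cosh t >= 0
for t in [x / 10.0 for x in range(-50, 51)]:
    det = (1 + math.exp(t)) * (1 + math.exp(-t))
    assert det >= 0

# the counterexample diag(-lam, -1/lam) from another component violates it:
for lam in [0.5, 2.0, 3.0]:
    det = (1 - lam) * (1 - 1 / lam)   # = 2 - lam - 1/lam < 0 for lam != 1
    assert det < 0
print("det(1+T) >= 0 on the identity component of O(1,1)")
```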

In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence {G_n = (V_n, E_n)} of finite graphs, one can extract a subsequence {G_{n_j} = (V_{n_j}, E_{n_j})} which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function {p\colon [0,1] \times [0,1] \rightarrow [0,1]}. What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon {p}. For instance, the edge density

\displaystyle  \frac{1}{|V_{n_j}|^2} |E_{n_j}|

converges to the integral

\displaystyle  \int_0^1 \int_0^1 p(x,y)\ dx dy,

the triangle density

\displaystyle  \frac{1}{|V_{n_j}|^3} \lvert \{ (v_1,v_2,v_3) \in V_{n_j}^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_{n_j} \} \rvert

converges to the integral

\displaystyle  \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ dx_1 dx_2 dx_3,

the four-cycle density

\displaystyle  \frac{1}{|V_{n_j}|^4} \lvert \{ (v_1,v_2,v_3,v_4) \in V_{n_j}^4: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_4\}, \{v_4,v_1\} \in E_{n_j} \} \rvert

converges to the integral

\displaystyle  \int_0^1 \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_4) p(x_4,x_1)\ dx_1 dx_2 dx_3 dx_4,

and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
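For a finite graph these density formulas can be realised exactly via the associated step graphon: partition {[0,1]} into {n} equal intervals and let {p} take the value {A_{ij}} of the adjacency matrix on the cell {I_i \times I_j}; the integrals then reduce to the same normalised sums over ordered tuples. A small illustrative computation (not from the post; the choice of graph is arbitrary):

```python
import numpy as np

# adjacency matrix of K_4 minus one edge (vertices 0..3, edge {2,3} removed)
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]], dtype=float)
n = A.shape[0]

# subgraph densities, counting ordered tuples as in the displayed formulas
edge_density     = A.sum() / n**2
triangle_density = np.einsum('ij,jk,ki->', A, A, A) / n**3

# the step graphon p is constant = A[i,j] on the cell I_i x I_j, so its
# integrals are Riemann sums that coincide exactly with the densities above
p_edge     = sum(A[i, j] for i in range(n) for j in range(n)) / n**2
p_triangle = sum(A[i, j] * A[j, k] * A[k, i]
                 for i in range(n) for j in range(n) for k in range(n)) / n**3

assert edge_density == p_edge and triangle_density == p_triangle
print(edge_density, triangle_density)   # 0.625 0.1875
```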

One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence {G_n = (V_n,E_n)} of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter {\alpha \in\beta {\bf N} \backslash {\bf N}}) to obtain a nonstandard graph {G_\alpha = (V_\alpha,E_\alpha)}, where {V_\alpha = \prod_{n\rightarrow \alpha} V_n} is the ultraproduct of the {V_n}, and similarly for the {E_\alpha}. The set {E_\alpha} can then be viewed as a symmetric subset of {V_\alpha \times V_\alpha} which is measurable with respect to the Loeb {\sigma}-algebra {{\mathcal L}_{V_\alpha \times V_\alpha}} of the product {V_\alpha \times V_\alpha} (see this previous blog post for the construction of Loeb measure). A crucial point is that this {\sigma}-algebra is larger than the product {{\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha}} of the Loeb {\sigma}-algebra of the individual vertex set {V_\alpha}. This leads to a decomposition

\displaystyle  1_{E_\alpha} = p + e

where the “graphon” {p} is the orthogonal projection of {1_{E_\alpha}} onto {L^2( {\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha} )}, and the “regular error” {e} is orthogonal to all product sets {A \times B} for {A, B \in {\mathcal L}_{V_\alpha}}. The graphon {p\colon V_\alpha \times V_\alpha \rightarrow [0,1]} then captures the statistics of the nonstandard graph {G_\alpha}, in exact analogy with the more traditional graph limits: for instance, the edge density

\displaystyle  \hbox{st} \frac{1}{|V_\alpha|^2} |E_\alpha|

(or equivalently, the limit of the {\frac{1}{|V_n|^2} |E_n|} along the ultrafilter {\alpha}) is equal to the integral

\displaystyle  \int_{V_\alpha} \int_{V_\alpha} p(x,y)\ d\mu_{V_\alpha}(x) d\mu_{V_\alpha}(y)

where {d\mu_V} denotes Loeb measure on a nonstandard finite set {V}; the triangle density

\displaystyle  \hbox{st} \frac{1}{|V_\alpha|^3} \lvert \{ (v_1,v_2,v_3) \in V_\alpha^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_\alpha \} \rvert

(or equivalently, the limit along {\alpha} of the triangle densities of {E_n}) is equal to the integral

\displaystyle  \int_{V_\alpha} \int_{V_\alpha} \int_{V_\alpha} p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ d\mu_{V_\alpha}(x_1) d\mu_{V_\alpha}(x_2) d\mu_{V_\alpha}(x_3),

and so forth. Note that with this construction, the graphon {p} is living on the Cartesian square of an abstract probability space {V_\alpha}, which is likely to be inseparable; but it is possible to cut down the Loeb {\sigma}-algebra on {V_\alpha} to a minimal countably generated {\sigma}-algebra for which {p} remains measurable (up to null sets), and then one can identify {V_\alpha} with {[0,1]}, bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)

Additive combinatorics, which studies things like the additive structure of finite subsets {A} of an abelian group {G = (G,+)}, has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.

It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group {A_\alpha} in a nonstandard group {G_\alpha = \prod_{n \rightarrow \alpha} G_n}, defined as the ultraproduct of finite {K}-approximate groups {A_n \subset G_n} for some standard {K}. (A {K}-approximate group {A_n} is a symmetric set containing the origin such that {A_n+A_n} can be covered by {K} or fewer translates of {A_n}.) We then let {O(A_\alpha)} be the external subgroup of {G_\alpha} generated by {A_\alpha}; equivalently, {O(A_\alpha)} is the union of {A_\alpha^m} over all standard {m}. This space has a Loeb measure {\mu_{O(A_\alpha)}}, defined by setting

\displaystyle \mu_{O(A_\alpha)}(E_\alpha) := \hbox{st} \frac{|E_\alpha|}{|A_\alpha|}

whenever {E_\alpha} is an internal subset of {A_\alpha^m} for any standard {m}, and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.

The Loeb measure {\mu_{O(A_\alpha)}} is a translation invariant measure on {O(A_{\alpha})}, normalised so that {A_\alpha} has Loeb measure one. As such, one should think of {O(A_\alpha)} as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that {O(A_\alpha)} is not actually a locally compact group with Haar measure, for two reasons:

  • There is not an obvious topology on {O(A_\alpha)} that makes it simultaneously locally compact, Hausdorff, and {\sigma}-compact. (One can get one or two out of three without difficulty, though.)
  • The addition operation {+\colon O(A_\alpha) \times O(A_\alpha) \rightarrow O(A_\alpha)} is not measurable from the product Loeb algebra {{\mathcal L}_{O(A_\alpha)} \times {\mathcal L}_{O(A_\alpha)}} to {{\mathcal L}_{O(A_\alpha)}}. Instead, it is measurable from the coarser Loeb algebra {{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}} to {{\mathcal L}_{O(A_\alpha)}} (compare with the analogous situation for nonstandard graphs).

Nevertheless, the analogy is a useful guide for the arguments that follow.

Let {L(O(A_\alpha))} denote the space of bounded Loeb measurable functions {f\colon O(A_\alpha) \rightarrow {\bf C}} (modulo almost everywhere equivalence) that are supported on {A_\alpha^m} for some standard {m}; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation {\star\colon L(O(A_\alpha)) \times L(O(A_\alpha)) \rightarrow L(O(A_\alpha))}, defined by setting

\displaystyle  \hbox{st} f \star \hbox{st} g(x) := \hbox{st} \frac{1}{|A_\alpha|} \sum_{y \in A_\alpha^m} f(y) g(x-y)

whenever {f\colon A_\alpha^m \rightarrow {}^* {\bf C}}, {g\colon A_\alpha^l \rightarrow {}^* {\bf C}} are bounded nonstandard functions (extended by zero to all of {O(A_\alpha)}), and then extending to arbitrary elements of {L(O(A_\alpha))} by density. Equivalently, {f \star g} is the pushforward of the {{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}-measurable function {(x,y) \mapsto f(x) g(y)} under the map {(x,y) \mapsto x+y}.
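On the model approximate group {A = \{-N,\dots,N\} \subset {\bf Z}}, the finitary analogue of this normalised convolution is easy to compute directly; the following hypothetical sketch shows the familiar "tent" shape of the convolution of an indicator with itself:

```python
import numpy as np

N = 100
A = np.arange(-N, N + 1)                      # the 2-approximate group {-N,...,N}

def conv(f, g, x):
    """Normalised convolution (1/|A|) * sum_{y in A} f(y) g(x - y)."""
    return sum(f(y) * g(x - y) for y in A) / len(A)

ind = lambda x: 1.0 if -N <= x <= N else 0.0  # indicator of A, extended by zero

# ind ⋆ ind is the triangle ("tent") function supported on A + A:
assert np.isclose(conv(ind, ind, 0), 1.0)                     # full overlap at 0
assert np.isclose(conv(ind, ind, N), (N + 1) / (2 * N + 1))   # about 1/2 at the edge of A
assert conv(ind, ind, 2 * N + 1) == 0.0                       # vanishes outside A + A
```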

The basic structural theorem is then as follows.

Theorem 1 (Kronecker factor) Let {A_\alpha} be an ultra approximate group. Then there exists a (standard) locally compact abelian group {G} of the form

\displaystyle  G = {\bf R}^d \times {\bf Z}^m \times T

for some standard {d,m} and some compact abelian group {T}, equipped with a Haar measure {\mu_G} and a measurable homomorphism {\pi\colon O(A_\alpha) \rightarrow G} (using the Loeb {\sigma}-algebra on {O(A_\alpha)} and the Baire {\sigma}-algebra on {G}), with the following properties:

  • (i) {\pi} has dense image, and {\mu_G} is the pushforward of Loeb measure {\mu_{O(A_\alpha)}} by {\pi}.
  • (ii) There exist sets {\{0\} \subset U_0 \subset K_0 \subset G} with {U_0} open and {K_0} compact, such that

    \displaystyle  \pi^{-1}(U_0) \subset 4A_\alpha \subset \pi^{-1}(K_0). \ \ \ \ \ (1)

  • (iii) Whenever {K \subset U \subset G} with {K} compact and {U} open, there exists a nonstandard finite set {B} such that

    \displaystyle  \pi^{-1}(K) \subset B \subset \pi^{-1}(U). \ \ \ \ \ (2)

  • (iv) If {f, g \in L}, then we have the convolution formula

    \displaystyle  f \star g = \pi^*( (\pi_* f) \star (\pi_* g) ) \ \ \ \ \ (3)

    where {\pi_* f,\pi_* g} are the pushforwards of {f,g} to {L^2(G, \mu_G)}, the convolution {\star} on the right-hand side is convolution using {\mu_G}, and {\pi^*} is the pullback map from {L^2(G,\mu_G)} to {L^2(O(A_\alpha), \mu_{O(A_\alpha)})}. In particular, if {\pi_* f = 0}, then {f \star g=0} for all {g \in L}.

One can view the locally compact abelian group {G} as a “model” or “Kronecker factor” for the ultra approximate group {A_\alpha} (in close analogy with the Kronecker factor from ergodic theory). In the case that {A_\alpha} is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components {{\bf R}^d \times {\bf Z}^m} of the Kronecker group {G} are trivial, and this theorem was implicitly established by Szegedy. The compact group {T} is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions {f}, one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor {G}. Once one is in the separable case, the Baire sigma algebra is identical with the more familiar Borel sigma algebra.

Given any sequence of uniformly bounded functions {f_n\colon A_n^m \rightarrow {\bf C}} for some fixed {m}, we can view the function {f \in L} defined by

\displaystyle  f := \pi_* \hbox{st} \lim_{n \rightarrow \alpha} f_n \ \ \ \ \ (4)

as an “additive limit” of the {f_n}, in much the same way that graphons {p\colon V_\alpha \times V_\alpha \rightarrow [0,1]} are limits of the indicator functions {1_{E_n}\colon V_n \times V_n \rightarrow \{0,1\}}. The additive limits capture some of the statistics of the {f_n}, for instance the normalised means

\displaystyle  \frac{1}{|A_n|} \sum_{x \in A_n^m} f_n(x)

converge (along the ultrafilter {\alpha}) to the mean

\displaystyle  \int_G f(x)\ d\mu_G(x),

and for three sequences {f_n,g_n,h_n\colon A_n^m \rightarrow {\bf C}} of functions, the normalised correlation

\displaystyle  \frac{1}{|A_n|^2} \sum_{x,y \in A_n^m} f_n(x) g_n(y) h_n(x+y)

converges along {\alpha} to the correlation

\displaystyle  \int_G \int_G f(x) g(y) h(x+y)\ d\mu_G(x) d\mu_G(y),

the normalised {U^2} Gowers norm

\displaystyle  ( \frac{1}{|A_n|^3} \sum_{x,y,z,w \in A_n^m: x+w=y+z} f_n(x) \overline{f_n(y)} \overline{f_n(z)} f_n(w))^{1/4}

converges along {\alpha} to the {U^2} Gowers norm

\displaystyle  ( \int_{G \times G \times G} f(x) \overline{f(y)} \overline{f(z)} f(x+y-z)\ d\mu_G(x) d\mu_G(y) d\mu_G(z))^{1/4}

and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised {\ell^2} norm

\displaystyle  (\frac{1}{|A_n|} \sum_{x \in A_n^m} |f_n(x)|^2)^{1/2}

does not necessarily converge to the {L^2} norm

\displaystyle  (\int_G |f(x)|^2\ d\mu_G(x))^{1/2},

but can converge instead to a larger quantity, due to the presence of the orthogonal projection {\pi_*} in the definition (4) of {f}.
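In the model case where {A_n = G_n = {\bf Z}/N{\bf Z}} is a full cyclic group, the normalised {U^2} norm above has the well-known Fourier description {\|f\|_{U^2}^4 = \sum_\xi |\hat f(\xi)|^4}, where {\hat f(\xi) = \frac{1}{N} \sum_x f(x) e^{-2\pi i x\xi/N}}. A numerical check of this identity (an illustrative sketch, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# U^2 norm to the fourth power, by direct summation over x + w = y + z in Z/N
u2_4 = sum(f[x] * np.conj(f[y]) * np.conj(f[z]) * f[(y + z - x) % N]
           for x in range(N) for y in range(N) for z in range(N)) / N**3

# Fourier side: np.fft.fft uses the e^{-2 pi i x xi / N} convention, so
# fhat = fft(f)/N matches the normalised Fourier coefficients above
fhat = np.fft.fft(f) / N
assert np.isclose(u2_4, np.sum(np.abs(fhat) ** 4))
```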

An important special case of an additive limit occurs when the functions {f_n\colon A_n^m \rightarrow {\bf C}} involved are indicator functions {f_n = 1_{E_n}} of some subsets {E_n} of {A_n^m}. The additive limit {f \in L} does not necessarily remain an indicator function, but instead takes values in {[0,1]} (much as a graphon {p} takes values in {[0,1]} even though the original indicators {1_{E_n}} take values in {\{0,1\}}). The convolution {f \star f\colon G \rightarrow [0,1]} is then the ultralimit of the normalised convolutions {\frac{1}{|A_n|} 1_{E_n} \star 1_{E_n}}; in particular, the measure of the support of {f \star f} provides a lower bound on the limiting normalised cardinality {\frac{1}{|A_n|} |E_n + E_n|} of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset {2E_n = E_n + E_n} could contain a large number of elements which have very few ({o(|A_n|)}) representations as the sum of two elements of {E_n}, and in the limit these portions of the sumset fall outside of the support of {f \star f}. (One can think of the support of {f \star f} as describing the “essential” sumset of {2E_n = E_n + E_n}, discarding those elements that have only very few representations.) Similarly for higher convolutions of {f}. Thus one can use additive limits to partially control the growth {k E_n} of iterated sumsets of subsets {E_n} of approximate groups {A_n}, in the regime where {k} stays bounded and {n} goes to infinity.
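The gap between the sumset and the “essential” sumset can be seen in a finitary toy example (a hypothetical illustration, not from the post): take {E = \{0\} \cup \{N,\dots,2N\}}. The portion {\{0\} \cup [N,2N)} of {E+E} has only {O(1)} representations per element, so it falls outside the support of {f \star f} in the limit, while {[2N,4N]} has {\sim N} representations per element:

```python
from collections import Counter

N = 1000
E = [0] + list(range(N, 2 * N + 1))

# representation function r(x) = #{(a, b) in E^2 : a + b = x}
r = Counter(a + b for a in E for b in E)

# the thin part {0} ∪ [N, 2N) of the sumset has only 1 or 2 representations
# per element, so in the limit it falls outside the support of f ⋆ f ...
assert r[0] == 1 and all(r[x] == 2 for x in range(N, 2 * N))

# ... while a typical element of the "essential" sumset [2N, 4N] has about N
# representations (here 3N = a + b with a ranging over all of [N, 2N])
assert r[3 * N] == N + 1
```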

Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.

Example 2 (Bohr sets) We take {A_n} to be the intervals {A_n := \{ x \in {\bf Z}: |x| \leq N_n \}}, where {N_n} is a sequence going to infinity; these are {2}-approximate groups for all {n}. Let {\theta} be an irrational real number, let {I} be an interval in {{\bf R}/{\bf Z}}, and for each natural number {n} let {B_n} be the Bohr set

\displaystyle  B_n := \{ x \in A_n: \theta x \hbox{ mod } 1 \in I \}.

In this case, the (reduced) Kronecker factor {G} can be taken to be the infinite cylinder {{\bf R} \times {\bf R}/{\bf Z}} with the usual Lebesgue measure {\mu_G}. The additive limits of {1_{A_n}} and {1_{B_n}} end up being {1_A} and {1_B}, where {A} is the finite cylinder

\displaystyle  A := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]\}

and {B} is the rectangle

\displaystyle  B := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]; t \in I \}.

Geometrically, one should think of {A_n} and {B_n} as being wrapped around the cylinder {{\bf R} \times {\bf R}/{\bf Z}} via the homomorphism {x \mapsto (\frac{x}{N_n}, \theta x \hbox{ mod } 1)}, and then one sees that {B_n} is converging in some normalised weak sense to {B}, and similarly for {A_n} and {A}. In particular, the additive limit predicts the growth rate of the iterated sumsets {kB_n} to be quadratic in {k} until {k|I|} becomes comparable to {1}, at which point the growth transitions to linear growth, in the regime where {k} is bounded and {n} is large.

If {\theta = \frac{p}{q}} were rational instead of irrational, then one would need to replace {{\bf R}/{\bf Z}} by the finite subgroup {\frac{1}{q}{\bf Z}/{\bf Z}} here.
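The wrapping of {B_n} around the cylinder can be simulated numerically (an illustrative sketch with an arbitrary choice of {\theta}, {N}, and {I}): the image of {B_n} under {x \mapsto (\frac{x}{N}, \theta x \hbox{ mod } 1)} sits inside the limit rectangle {B = [-1,1] \times I}, and by equidistribution the density of {B_n} in {A_n} approaches {|I|}:

```python
import math

theta = (math.sqrt(5) - 1) / 2          # an irrational frequency (golden ratio)
N = 10_000
I = (0.2, 0.3)                          # the interval I in R/Z, of length 0.1

A = range(-N, N + 1)
B = [x for x in A if I[0] <= (theta * x) % 1 < I[1]]

# wrap each point of the Bohr set around the cylinder R x (R/Z)
wrapped = [(x / N, (theta * x) % 1) for x in B]

# every wrapped point lands in the limit rectangle B = [-1,1] x I ...
assert all(-1 <= u <= 1 and I[0] <= t < I[1] for u, t in wrapped)
# ... and the density of B_n in A_n is close to |I| by equidistribution
assert abs(len(B) / len(A) - (I[1] - I[0])) < 0.01
```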

Example 3 (Structured subsets of progressions) We take {A_n} to be the rank two progression

\displaystyle  A_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|, |b| \leq N_n \},

where {N_n} is a sequence going to infinity; these are {4}-approximate groups for all {n}. Let {B_n} be the subset

\displaystyle  B_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|^2 + |b|^2 \leq N_n^2 \}.

Then the (reduced) Kronecker factor can be taken to be {G = {\bf R}^2} with Lebesgue measure {\mu_G}, and the additive limits of the {1_{A_n}} and {1_{B_n}} are then {1_A} and {1_B}, where {A} is the square

\displaystyle  A := \{ (a,b) \in {\bf R}^2: |a|, |b| \leq 1 \}

and {B} is the disk

\displaystyle  B := \{ (a,b) \in {\bf R}^2: a^2+b^2 \leq 1 \}.

Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism {a + b N_n^2 \mapsto (\frac{a}{N_n}, \frac{b}{N_n})} for {a,b = O( N_n )} to embed the original sets {A_n, B_n} into the plane {{\bf R}^2}. In particular, one now expects the growth rate of the iterated sumsets {k A_n} and {k B_n} to be quadratic in {k}, in the regime where {k} is bounded and {n} is large.
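The quadratic growth of the iterated sumsets can be checked by brute force for a small {N} (an illustrative sketch, not from the post): as long as {kN} is much smaller than {N^2} there is no "carrying" between the two digits {a} and {b}, so {k A_n = \{a + bN^2: |a|,|b| \leq kN\}} exactly:

```python
N = 10  # small enough to enumerate, with k*N << N^2 for the k below

def kA(k):
    """The iterated sumset k A_n for A_n = { a + b N^2 : |a|, |b| <= N }."""
    A = {a + b * N**2 for a in range(-N, N + 1) for b in range(-N, N + 1)}
    S = {0}
    for _ in range(k):
        S = {s + x for s in S for x in A}
    return S

# with no carrying between the digits a and b, |k A_n| = (2 k N + 1)^2,
# i.e. quadratic growth in k
for k in (1, 2, 3):
    assert len(kA(k)) == (2 * k * N + 1) ** 2
```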

Example 4 (Dissociated sets) Let {d} be a fixed natural number, and take

\displaystyle  A_n = \{0, v_1,\dots,v_d,-v_1,\dots,-v_d \}

where {v_1,\dots,v_d} are randomly chosen elements of a large cyclic group {{\bf Z}/p_n{\bf Z}}, where {p_n} is a sequence of primes going to infinity. These are {O(d)}-approximate groups. The (reduced) Kronecker factor {G} can (almost surely) then be taken to be {{\bf Z}^d} with counting measure, and the additive limit of {1_{A_n}} is {1_A}, where {A = \{ 0, e_1,\dots,e_d,-e_1,\dots,-e_d\}} and {e_1,\dots,e_d} is the standard basis of {{\bf Z}^d}. In particular, the size of {k A_n} should grow approximately like {k^d} for {k} bounded and {n} large.
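For generic {v_1,\dots,v_d} there are (almost surely) no small additive relations, so {kA_n} has the same size as the iterated sumset of {\{0,\pm e_1,\dots,\pm e_d\}} in {{\bf Z}^d}, namely the {\ell^1} ball of radius {k}. A quick computation of these sizes (an illustrative sketch, not from the post):

```python
from itertools import product

def ball_size(d, k):
    """|kA| for A = {0, ±e_1, ..., ±e_d} in Z^d: the l^1 ball of radius k."""
    return sum(1 for v in product(range(-k, k + 1), repeat=d)
               if sum(map(abs, v)) <= k)

# degree-d polynomial growth in k; for d = 2 these are the centred square
# numbers 2k^2 + 2k + 1
assert [ball_size(2, k) for k in (1, 2, 3)] == [5, 13, 25]
assert all(ball_size(2, k) == 2 * k**2 + 2 * k + 1 for k in range(1, 8))
```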

Example 5 (Random subsets of groups) Let {A_n = G_n} be a sequence of finite additive groups whose order is going to infinity. Let {B_n} be a random subset of {G_n} of some fixed density {0 \leq \lambda \leq 1}. Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group {\{0\}}, and the additive limit of the {1_{B_n}} is the constant function {\lambda}. The convolutions {\frac{1}{|G_n|} 1_{B_n} * 1_{B_n}} then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of {\lambda^2}; this reflects the fact that {(1-o(1))|G_n|} of the elements of {G_n} can be represented as the sum of two elements of {B_n} in {(\lambda^2 + o(1)) |G_n|} ways. In particular, {B_n+B_n} occupies a proportion {1-o(1)} of {G_n}.

Example 6 (Trigonometric series) Take {A_n = G_n = {\bf Z}/p_n {\bf Z}} for a sequence {p_n} of primes going to infinity, and for each {n} let {\xi_{n,1},\xi_{n,2},\dots} be an infinite sequence of frequencies chosen uniformly and independently from {{\bf Z}/p_n{\bf Z}}. Let {f_n\colon {\bf Z}/p_n{\bf Z} \rightarrow {\bf C}} denote the random trigonometric series

\displaystyle  f_n(x) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i \xi_{n,j} x / p_n }.

Then (almost surely) we can take the reduced Kronecker factor {G} to be the infinite torus {({\bf R}/{\bf Z})^{\bf N}} (with the Haar probability measure {\mu_G}), and the additive limit of the {f_n} then becomes the function {f\colon ({\bf R}/{\bf Z})^{\bf N} \rightarrow {\bf C}} defined by the formula

\displaystyle  f( (x_j)_{j=1}^\infty ) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i x_j}.

In fact, the pullback {\pi^* f} is the ultralimit of the {f_n}. As such, for any standard exponent {1 \leq q < \infty}, the normalised {l^q} norm

\displaystyle  (\frac{1}{p_n} \sum_{x \in {\bf Z}/p_n{\bf Z}} |f_n(x)|^q)^{1/q}

can be seen to converge to the limit

\displaystyle  (\int_{({\bf R}/{\bf Z})^{\bf N}} |f(x)|^q\ d\mu_G(x))^{1/q}.

The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.

It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.

Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.

Read the rest of this entry »

One of the first basic theorems in group theory is Cayley’s theorem, which links abstract finite groups with concrete finite groups (otherwise known as permutation groups).

Theorem 1 (Cayley’s theorem) Let {G} be a group of some finite order {n}. Then {G} is isomorphic to a subgroup {\tilde G} of the symmetric group {S_n} on {n} elements {\{1,\dots,n\}}. Furthermore, this subgroup is simply transitive: given two elements {x,y} of {\{1,\dots,n\}}, there is precisely one element {\sigma} of {\tilde G} such that {\sigma(x)=y}.

One can therefore think of {S_n} as a sort of “universal” group that contains (up to isomorphism) all the possible groups of order {n}.

Proof: The group {G} acts on itself by multiplication on the left, thus each element {g \in G} may be identified with a permutation {\sigma_g: G \rightarrow G} on {G} given by the map {\sigma_g(h) := gh}. This can be easily verified to identify {G} with a simply transitive permutation group on {G}. The claim then follows by arbitrarily identifying {G} with {\{1,\dots,n\}}. \Box

More explicitly, the permutation group {\tilde G} arises by arbitrarily enumerating {G} as {\{s_1,\dots,s_n\}} and then associating to each group element {g \in G} the permutation {\sigma_g: \{1,\dots,n\} \rightarrow \{1,\dots,n\}} defined by the formula

\displaystyle g s_i = s_{\sigma_g(i)}.

The simply transitive group {\tilde G} given by Cayley’s theorem is not unique, due to the arbitrary choice of identification of {G} with {\{1,\dots,n\}}, but is unique up to conjugation by an element of {S_n}. On the other hand, it is easy to see that every simply transitive subgroup of {S_n} is of order {n}, and that two such groups are isomorphic if and only if they are conjugate by an element of {S_n}. Thus Cayley’s theorem in fact identifies the moduli space of groups of order {n} (up to isomorphism) with the simply transitive subgroups of {S_n} (up to conjugacy by elements of {S_n}).
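The explicit construction {g s_i = s_{\sigma_g(i)}} is easy to carry out by machine; here is a hypothetical sketch for {G = S_3} (elements stored as tuples of images), verifying that {g \mapsto \sigma_g} is an injective homomorphism whose image is simply transitive:

```python
from itertools import permutations

# the group S_3, with elements as tuples (images of 0,1,2)
G = list(permutations(range(3)))
def mult(g, h):                      # composition law (g h)(x) = g(h(x))
    return tuple(g[h[x]] for x in range(3))

n = len(G)                           # n = |G| = 6
index = {s: i for i, s in enumerate(G)}

def sigma(g):
    """The permutation sigma_g of {0,...,n-1} defined by g s_i = s_{sigma_g(i)}."""
    return tuple(index[mult(g, s)] for s in G)

# g -> sigma_g is an injective homomorphism into S_n ...
for g in G:
    for h in G:
        assert sigma(mult(g, h)) == tuple(sigma(g)[sigma(h)[i]] for i in range(n))
assert len({sigma(g) for g in G}) == n

# ... whose image is simply transitive: exactly one element maps x to y
x, y = 0, 4
assert sum(1 for g in G if sigma(g)[x] == y) == 1
```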

One can generalise Cayley’s theorem to groups of infinite order without much difficulty. But in this post, I would like to note an (easy) generalisation of Cayley’s theorem in a different direction, in which the group {G} is no longer assumed to be of order {n}, but rather to have an index {n} subgroup that is isomorphic to a fixed group {H}. The generalisation is:

Theorem 2 (Cayley’s theorem for {H}-sets) Let {H} be a group, and let {G} be a group that contains an index {n} subgroup isomorphic to {H}. Then {G} is isomorphic to a subgroup {\tilde G} of the semidirect product {S_n \ltimes H^n}, defined explicitly as the set of tuples {(\sigma, (h_i)_{i=1}^n)} with product

\displaystyle  (\sigma, (h_i)_{i=1}^n) (\rho, (k_i)_{i=1}^n) := (\sigma \circ \rho, (h_{\rho(i)} k_i)_{i=1}^n )

and inverse

\displaystyle  (\sigma, (h_i)_{i=1}^n)^{-1} := (\sigma^{-1}, (h_{\sigma^{-1}(i)}^{-1})_{i=1}^n).

(This group is a wreath product of {H} with {S_n}, and is sometimes denoted {H \wr S_n}, or more precisely {H \wr_{\{1,\dots,n\}} S_n}.) Furthermore, {\tilde G} is simply transitive in the following sense: given any two elements {x,y} of {\{1,\dots,n\}} and {h,k \in H}, there is precisely one {(\sigma, (h_i)_{i=1}^n)} in {\tilde G} such that {\sigma(x)=y} and {k = h_x h}.

Of course, Theorem 1 is the special case of Theorem 2 when {H} is trivial. This theorem allows one to view {S_n \ltimes H^n} as a “universal” group for modeling all groups containing a copy of {H} as an index {n} subgroup, in exactly the same way that {S_n} is a universal group for modeling groups of order {n}. This observation is not at all deep, but I had not seen it before, so I thought I would record it here. (EDIT: as pointed out in comments, this is a slight variant of the universal embedding theorem of Krasner and Kaloujnine, which covers the case when {H} is normal, in which case one can embed {G} into the wreath product {H \wr G/H}, which is a subgroup of {H \wr S_n}.)

Proof: The basic idea here is to replace the category of sets in Theorem 1 by the category of {H}-sets, by which we mean sets {X} with a right-action of the group {H}. A morphism between two {H}-sets {X,Y} is a function {f: X \rightarrow Y} which respects the right action of {H}, thus {f(x)h = f(xh)} for all {x \in X} and {h \in H}.

Observe that if {G} contains a copy of {H} as a subgroup, then one can view {G} as an {H}-set, using the right-action of {H} (which we identify with the indicated subgroup of {G}). The left action of {G} on itself commutes with the right-action of {H}, and so we can represent {G} by {H}-set automorphisms on the {H}-set {G}.

As {H} has index {n} in {G}, we see that {G} is (non-canonically) isomorphic (as an {H}-set) to the {H}-set {\{1,\dots,n\} \times H} with the obvious right action of {H}: {(i,h) k := (i,hk)}. It is easy to see that the group of {H}-set automorphisms of {\{1,\dots,n\} \times H} can be identified with {S_n \ltimes H^n}, with the latter group acting on the former {H}-set by the rule

\displaystyle  (\sigma, (h_i)_{i=1}^n) (i,h) := (\sigma(i), h_i h)

(it is routine to verify that this is indeed an action of {S_n \ltimes H^n} by {H}-set automorphisms). It is then a routine matter to verify the claims (the simple transitivity of {\tilde G} follows from the simple transitivity of the action of {G} on itself). \Box

More explicitly, the group {\tilde G} arises by arbitrarily enumerating the left-cosets of {H} in {G} as {\{s_1H,\dots,s_nH\}} and then associating to each group element {g \in G} the element {(\sigma_g, (h_{g,i})_{i=1}^n )}, where the permutation {\sigma_g: \{1,\dots,n\} \rightarrow \{1,\dots,n\}} and the elements {h_{g,i} \in H} are defined by the formula

\displaystyle  g s_i = s_{\sigma_g(i)} h_{g,i}.

By noting that {H^n} is an index {n!} normal subgroup of {S_n \ltimes H^n}, we recover the classical result of Poincaré that any group {G} that contains {H} as an index {n} subgroup, contains a normal subgroup {N} of index dividing {n!} that is contained in {H}. (Quotienting out the {H} right-action, we recover also the classical proof of this result, as the action of {G} on itself then collapses to the action of {G} on the quotient space {G/H}, the stabiliser of which is {N}.)
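For a concrete instance of the defining formula {g s_i = s_{\sigma_g(i)} h_{g,i}}, take {G = S_3} and {H = A_3} of index {n = 2} (a hypothetical sketch, not from the post): one recovers the data {(\sigma_g, (h_{g,i})_{i=1}^n)} from left-coset membership and verifies the wreath product law directly:

```python
from itertools import permutations

def mult(g, h): return tuple(g[h[x]] for x in range(3))   # (g h)(x) = g(h(x))
def inv(g):
    r = [0] * 3
    for x in range(3): r[g[x]] = x
    return tuple(r)

G = list(permutations(range(3)))                # G = S_3
H = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]           # H = A_3, index n = 2
s = [(0, 1, 2), (1, 0, 2)]                      # left coset representatives s_1, s_2
n = len(s)

def phi(g):
    """The embedding g -> (sigma_g, (h_{g,i})) defined by g s_i = s_{sigma_g(i)} h_{g,i}."""
    sig, hs = [], []
    for i in range(n):
        t = mult(g, s[i])
        j = next(j for j in range(n) if mult(inv(s[j]), t) in H)
        sig.append(j)
        hs.append(mult(inv(s[j]), t))
    return tuple(sig), tuple(hs)

def wprod(a, b):
    """The wreath product law (sigma,(h_i)) (rho,(k_i)) = (sigma rho, (h_{rho(i)} k_i))."""
    (sig, hs), (rho, ks) = a, b
    return (tuple(sig[rho[i]] for i in range(n)),
            tuple(mult(hs[rho[i]], ks[i]) for i in range(n)))

# phi is an injective homomorphism from G into S_2 ⋉ H^2
assert all(phi(mult(g, k)) == wprod(phi(g), phi(k)) for g in G for k in G)
assert len({phi(g) for g in G}) == len(G)
```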

Exercise 1 Show that a simply transitive subgroup {\tilde G} of {S_n \ltimes H^n} contains a copy of {H} as an index {n} subgroup; in particular, there is a canonical embedding of {H} into {\tilde G}, and {\tilde G} can be viewed as an {H}-set.

Exercise 2 Show that any two simply transitive subgroups {\tilde G_1, \tilde G_2} of {S_n \ltimes H^n} are isomorphic simultaneously as groups and as {H}-sets (that is, there is a bijection {\phi: \tilde G_1 \rightarrow \tilde G_2} that is simultaneously a group isomorphism and an {H}-set isomorphism) if and only if they are conjugate by an element of {S_n \ltimes H^n}.

[UPDATE: Exercises corrected; thanks to Keith Conrad for some additional corrections and comments.]
