
The wave equation is usually expressed in the form

\displaystyle  \partial_{tt} u - \Delta u = 0

where {u \colon {\bf R} \times {\bf R}^d \rightarrow {\bf C}} is a function of both time {t \in {\bf R}} and space {x \in {\bf R}^d}, with {\Delta} being the Laplacian operator. One can generalise this equation in a number of ways, for instance by replacing the spatial domain {{\bf R}^d} with some other manifold and replacing the Laplacian {\Delta} with the Laplace-Beltrami operator or adding lower order terms (such as a potential, or a coupling with a magnetic field). But for sake of discussion let us work with the classical wave equation on {{\bf R}^d}. We will work formally in this post, being unconcerned with issues of convergence, justifying interchange of integrals, derivatives, or limits, etc.. One then has a conserved energy

\displaystyle  \int_{{\bf R}^d} \frac{1}{2} |\nabla u(t,x)|^2 + \frac{1}{2} |\partial_t u(t,x)|^2\ dx

which we can rewrite using integration by parts and the {L^2} inner product {\langle, \rangle} on {{\bf R}^d} as

\displaystyle  \frac{1}{2} \langle -\Delta u(t), u(t) \rangle + \frac{1}{2} \langle \partial_t u(t), \partial_t u(t) \rangle.

A key feature of the wave equation is finite speed of propagation: if, at time {t=0} (say), the initial position {u(0)} and initial velocity {\partial_t u(0)} are both supported in a ball {B(x_0,R) := \{ x \in {\bf R}^d: |x-x_0| \leq R \}}, then at any later time {t>0}, the position {u(t)} and velocity {\partial_t u(t)} are supported in the larger ball {B(x_0,R+t)}. This can be seen for instance (formally, at least) by inspecting the exterior energy

\displaystyle  \int_{|x-x_0| > R+t} \frac{1}{2} |\nabla u(t,x)|^2 + \frac{1}{2} |\partial_t u(t,x)|^2\ dx

and observing (after some integration by parts and differentiation under the integral sign) that it is non-increasing in time, non-negative, and vanishing at time {t=0}.
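
For the record, here is one way to organise that formal computation, as a sketch. Writing {E_{ext}(t)} for the exterior energy, differentiation under the integral sign (taking into account the moving boundary) gives

\displaystyle  \partial_t E_{ext}(t) = -\int_{|x-x_0| = R+t} \frac{1}{2} |\nabla u|^2 + \frac{1}{2} |\partial_t u|^2\ dS + \int_{|x-x_0| > R+t} \nabla u \cdot \nabla \partial_t u + \partial_t u\, \partial_{tt} u\ dx.

Integrating by parts in the second integral (picking up a boundary term, noting that the outward normal of the exterior region points towards {x_0}) and using the wave equation {\partial_{tt} u = \Delta u}, this simplifies to

\displaystyle  \partial_t E_{ext}(t) = -\int_{|x-x_0| = R+t} \frac{1}{2} |\nabla u|^2 + \frac{1}{2} |\partial_t u|^2 + \partial_t u\, \partial_n u\ dS

where {\partial_n} is the radial derivative pointing away from {x_0}. Since {|\partial_t u\, \partial_n u| \leq \frac{1}{2} |\partial_n u|^2 + \frac{1}{2} |\partial_t u|^2 \leq \frac{1}{2} |\nabla u|^2 + \frac{1}{2} |\partial_t u|^2}, the integrand is non-negative, and the exterior energy is indeed non-increasing in time.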

The wave equation is second order in time, but one can turn it into a first order system by working with the pair {(u(t),v(t))} rather than just the single field {u(t)}, where {v(t) := \partial_t u(t)} is the velocity field. The system is then

\displaystyle  \partial_t u(t) = v(t)

\displaystyle  \partial_t v(t) = \Delta u(t)

and the conserved energy is now

\displaystyle  \frac{1}{2} \langle -\Delta u(t), u(t) \rangle + \frac{1}{2} \langle v(t), v(t) \rangle. \ \ \ \ \ (1)

Finite speed of propagation then tells us that if {u(0),v(0)} are both supported on {B(x_0,R)}, then {u(t),v(t)} are supported on {B(x_0,R+t)} for all {t>0}. One also has time reversal symmetry: if {t \mapsto (u(t),v(t))} is a solution, then {t \mapsto (u(-t), -v(-t))} is a solution also, thus for instance one can establish an analogue of finite speed of propagation for negative times {t<0} using this symmetry.

If one has an eigenfunction

\displaystyle  -\Delta \phi = \lambda^2 \phi

of the Laplacian, then we have the explicit solutions

\displaystyle  u(t) = e^{\pm it \lambda} \phi

\displaystyle  v(t) = \pm i \lambda e^{\pm it \lambda} \phi

of the wave equation, which formally can be used to construct all other solutions via the principle of superposition.

When one has vanishing initial velocity {v(0)=0}, the solution {u(t)} is given via functional calculus by

\displaystyle  u(t) = \cos(t \sqrt{-\Delta}) u(0)

and the propagator {\cos(t \sqrt{-\Delta})} can be expressed as the average of half-wave operators:

\displaystyle  \cos(t \sqrt{-\Delta}) = \frac{1}{2} ( e^{it\sqrt{-\Delta}} + e^{-it\sqrt{-\Delta}} ).

One can view {\cos(t \sqrt{-\Delta} )} as a minor of the full wave propagator

\displaystyle  U(t) := \exp \begin{pmatrix} 0 & t \\ t\Delta & 0 \end{pmatrix}

\displaystyle  = \begin{pmatrix} \cos(t \sqrt{-\Delta}) & \frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}} \\ -\sin(t\sqrt{-\Delta}) \sqrt{-\Delta} & \cos(t \sqrt{-\Delta} ) \end{pmatrix}

which is unitary with respect to the energy form (1), and is the fundamental solution to the wave equation in the sense that

\displaystyle  \begin{pmatrix} u(t) \\ v(t) \end{pmatrix} = U(t) \begin{pmatrix} u(0) \\ v(0) \end{pmatrix}. \ \ \ \ \ (2)

Viewing the contraction {\cos(t\sqrt{-\Delta})} as a minor of a unitary operator is an instance of the “dilation trick”.

It turns out (as I learned from Yuval Peres) that there is a useful discrete analogue of the wave equation (and of all of the above facts), in which the time variable {t} now lives on the integers {{\bf Z}} rather than on {{\bf R}}, and the spatial domain can be replaced by discrete domains also (such as graphs). Formally, the system is now of the form

\displaystyle  u(t+1) = P u(t) + v(t) \ \ \ \ \ (3)

\displaystyle  v(t+1) = P v(t) - (1-P^2) u(t)

where {t} is now an integer, {u(t), v(t)} take values in some Hilbert space (e.g. {\ell^2} functions on a graph {G}), and {P} is some operator on that Hilbert space (which in applications will usually be a self-adjoint contraction). To connect this with the classical wave equation, let us first consider a rescaling of this system

\displaystyle  u(t+\varepsilon) = P_\varepsilon u(t) + \varepsilon v(t)

\displaystyle  v(t+\varepsilon) = P_\varepsilon v(t) - \frac{1}{\varepsilon} (1-P_\varepsilon^2) u(t)

where {\varepsilon>0} is a small parameter (representing the discretised time step), {t} now takes values in the integer multiples {\varepsilon {\bf Z}} of {\varepsilon}, and {P_\varepsilon} is either the wave propagator {P_\varepsilon := \cos( \varepsilon \sqrt{-\Delta} )} or the heat propagator {P_\varepsilon := \exp( \varepsilon^2 \Delta/2 )} (the two operators are different, but agree to fourth order in {\varepsilon}). One can then formally verify that the wave equation emerges from this rescaled system in the limit {\varepsilon \rightarrow 0}. (Thus, {P} is not exactly the direct analogue of the Laplacian {\Delta}, but can be viewed as something like {P_\varepsilon = 1 + \frac{\varepsilon^2}{2} \Delta + O( \varepsilon^4 )} in the case of small {\varepsilon}, or {P = 1 + \frac{1}{2}\Delta + O(\Delta^2)} if we are not rescaling to the small {\varepsilon} case. The operator {P} is sometimes known as the diffusion operator.)
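
As a quick sanity check on the claim that the two choices of {P_\varepsilon} agree to fourth order, one can act on an eigenfunction {-\Delta \phi = \lambda^2 \phi}, upon which the two propagators become the scalar functions compared in the following short symbolic computation (Python/sympy; a minimal sketch, not part of the original argument):

```python
import sympy as sp

eps, lam = sp.symbols('epsilon lambda', positive=True)

# On an eigenfunction with -Delta phi = lambda^2 phi, the wave propagator
# cos(eps*sqrt(-Delta)) acts as cos(eps*lam), while the heat propagator
# exp(eps^2*Delta/2) acts as exp(-eps^2*lam^2/2).
wave = sp.cos(eps * lam)
heat = sp.exp(-eps**2 * lam**2 / 2)

print(sp.series(wave, eps, 0, 6))         # 1 - eps^2*lam^2/2 + eps^4*lam^4/24 + ...
print(sp.series(heat, eps, 0, 6))         # 1 - eps^2*lam^2/2 + eps^4*lam^4/8 + ...
print(sp.series(wave - heat, eps, 0, 6))  # the difference only enters at order eps^4
```

Both expansions share the terms {1 - \frac{\varepsilon^2 \lambda^2}{2}}, with the discrepancy only entering at order {\varepsilon^4}.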

Assuming {P} is self-adjoint, solutions to the system (3) formally conserve the energy

\displaystyle  \frac{1}{2} \langle (1-P^2) u(t), u(t) \rangle + \frac{1}{2} \langle v(t), v(t) \rangle. \ \ \ \ \ (4)

This energy is positive semi-definite if {P} is a contraction. We have the same time reversal symmetry as before: if {t \mapsto (u(t),v(t))} solves the system (3), then so does {t \mapsto (u(-t), -v(-t))}. If one has an eigenfunction

\displaystyle  P \phi = \cos(\lambda) \phi

of the operator {P}, then one has an explicit solution

\displaystyle  u(t) = e^{\pm it \lambda} \phi

\displaystyle  v(t) = \pm i \sin(\lambda) e^{\pm it \lambda} \phi

to (3), and (in principle at least) this generates all other solutions via the principle of superposition.

Finite speed of propagation is a lot easier in the discrete setting, though one has to offset the support of the “velocity” field {v} by one unit. Suppose we know that {P} has unit speed in the sense that whenever {f} is supported in a ball {B(x,R)}, then {Pf} is supported in the ball {B(x,R+1)}. Then an easy induction shows that if {u(0), v(0)} are supported in {B(x_0,R), B(x_0,R+1)} respectively, then {u(t), v(t)} are supported in {B(x_0,R+t), B(x_0, R+t+1)}.
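
To see these facts in action, here is a small numerical sketch (Python/numpy; illustrative only) of the system (3) on the cycle {{\bf Z}/N{\bf Z}}, with {P} taken to be the operator of averaging over the two nearest neighbours, which is a self-adjoint contraction of unit speed. The script checks that the energy (4) is conserved and that the support of {u(t)} grows by at most one vertex per time step, as claimed.

```python
import numpy as np

N = 101                      # number of vertices on the cycle Z/NZ
P = np.zeros((N, N))
for i in range(N):           # P = averaging over the two neighbours: a self-adjoint
    P[i, (i - 1) % N] = 0.5  # contraction of unit speed
    P[i, (i + 1) % N] = 0.5

def energy(u, v):
    # discrete energy (4): (1/2)<(1-P^2)u, u> + (1/2)<v, v>
    return 0.5 * u @ ((np.eye(N) - P @ P) @ u) + 0.5 * v @ v

x0 = N // 2
u = np.zeros(N); u[x0] = 1.0   # initial position: a delta at x0, supported in B(x0, 0)
v = np.zeros(N)                # initial velocity: zero

E0 = energy(u, v)
for t in range(1, 31):
    u, v = P @ u + v, P @ v - (np.eye(N) - P @ P) @ u
    assert abs(energy(u, v) - E0) < 1e-10           # energy (4) is conserved
    support = np.nonzero(np.abs(u) > 1e-12)[0]
    assert all(min(abs(s - x0), N - abs(s - x0)) <= t for s in support)  # finite speed
print("energy conserved and support of u(t) stays inside B(x0, t)")
```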

The fundamental solution {U(t) = U^t} to the discretised wave equation (3), in the sense of (2), is given by the formula

\displaystyle  U(t) = U^t = \begin{pmatrix} P & 1 \\ P^2-1 & P \end{pmatrix}^t

\displaystyle  = \begin{pmatrix} T_t(P) & U_{t-1}(P) \\ (P^2-1) U_{t-1}(P) & T_t(P) \end{pmatrix}

where {T_t} and {U_t} are the Chebyshev polynomials of the first and second kind, thus

\displaystyle  T_t( \cos \theta ) = \cos(t\theta)

and

\displaystyle  U_t( \cos \theta ) = \frac{\sin((t+1)\theta)}{\sin \theta}.

In particular, {P} is now a minor of {U(1) = U}, and can also be viewed as an average of {U} with its inverse {U^{-1}}:

\displaystyle  \begin{pmatrix} P & 0 \\ 0 & P \end{pmatrix} = \frac{1}{2} (U + U^{-1}). \ \ \ \ \ (5)

As before, {U} is unitary with respect to the energy form (4), so this is another instance of the dilation trick in action. The powers {P^n} and {U^n} are discrete analogues of the heat propagators {e^{t\Delta/2}} and wave propagators {U(t)} respectively.
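
Since these are formal identities in {P}, they can be checked symbolically by specialising {P} to the scalar {\cos \theta}; here is a minimal sympy sketch verifying the Chebyshev formula for {U(t)} and the identity (5) for the first few values of {t}:

```python
import sympy as sp

theta = sp.symbols('theta')
p = sp.cos(theta)                       # specialise P to the scalar cos(theta)
U = sp.Matrix([[p, 1], [p**2 - 1, p]])  # the one-step propagator U = U(1)

for n in range(1, 6):
    Tn = sp.chebyshevt(n, p)            # first kind: T_n(cos theta) = cos(n*theta)
    Unm1 = sp.chebyshevu(n - 1, p)      # second kind: U_{n-1}(cos theta) = sin(n*theta)/sin(theta)
    predicted = sp.Matrix([[Tn, Unm1], [(p**2 - 1) * Unm1, Tn]])
    assert (U**n - predicted).applyfunc(sp.simplify) == sp.zeros(2, 2)

# the identity (5) behind the dilation trick: (U + U^{-1})/2 = P times the identity
assert ((U + U.inv()) / 2 - p * sp.eye(2)).applyfunc(sp.simplify) == sp.zeros(2, 2)
print("Chebyshev formula for U(t) and identity (5) verified for t = 1,...,5")
```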

One nice application of all this formalism, which I learned from Yuval Peres, is the Varopoulos-Carne inequality:

Theorem 1 (Varopoulos-Carne inequality) Let {G} be a (possibly infinite) regular graph, let {n \geq 1}, and let {x, y} be vertices in {G}. Then the probability that the simple random walk at {x} lands at {y} at time {n} is at most {2 \exp( - d(x,y)^2 / 2n )}, where {d} is the graph distance.

This general inequality is quite sharp, as one can see using the standard Cayley graph on the integers {{\bf Z}}. Very roughly speaking, it asserts that on a regular graph of reasonably controlled growth (e.g. polynomial growth), random walks of length {n} concentrate on the ball of radius {O(\sqrt{n})} or so centred at the origin of the random walk.

Proof: Let {P \colon \ell^2(G) \rightarrow \ell^2(G)} be the Markov operator of the simple random walk on {G} (so that {1-P} is the graph Laplacian), thus

\displaystyle  Pf(x) = \frac{1}{D} \sum_{y \sim x} f(y)

for any {f \in \ell^2(G)}, where {D} is the degree of the regular graph and the sum is over the {D} vertices {y} that are adjacent to {x}. This is a self-adjoint contraction of unit speed, and the probability that the random walk at {x} lands at {y} at time {n} is

\displaystyle  \langle P^n \delta_x, \delta_y \rangle

where {\delta_x, \delta_y} are the Dirac deltas at {x,y}. Using (5), we can rewrite this as

\displaystyle  \langle (\frac{1}{2} (U + U^{-1}))^n \begin{pmatrix} 0 \\ \delta_x\end{pmatrix}, \begin{pmatrix} 0 \\ \delta_y\end{pmatrix} \rangle

where we are now using the energy form (4). We can write

\displaystyle  (\frac{1}{2} (U + U^{-1}))^n = {\bf E} U^{S_n}

where {S_n} is the simple random walk of length {n} on the integers, that is to say {S_n = \xi_1 + \dots + \xi_n} where {\xi_1,\dots,\xi_n = \pm 1} are independent uniform Bernoulli signs. Thus we wish to show that

\displaystyle  {\bf E} \langle U^{S_n} \begin{pmatrix} 0 \\ \delta_x\end{pmatrix}, \begin{pmatrix} 0 \\ \delta_y\end{pmatrix} \rangle \leq 2 \exp(-d(x,y)^2 / 2n ).

By finite speed of propagation, the inner product here vanishes if {|S_n| < d(x,y)}. For {|S_n| \geq d(x,y)} we can use Cauchy-Schwarz and the unitary nature of {U} to bound the inner product by {1}. Thus the left-hand side may be upper bounded by

\displaystyle  {\bf P}( |S_n| \geq d(x,y) )

and the claim now follows from the Chernoff inequality. \Box
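
To illustrate the sharpness discussion above, here is a quick Python comparison of the exact return probabilities of the simple random walk on the Cayley graph of {{\bf Z}} with the Varopoulos-Carne bound; the choice {n=100} is of course arbitrary.

```python
from math import comb, exp

def prob_srw_on_Z(n, y):
    """Probability that the simple random walk on Z, started at 0, is at y after n steps."""
    if (n + y) % 2 != 0 or abs(y) > n:
        return 0.0
    return comb(n, (n + y) // 2) / 2**n

n = 100
for y in [0, 2, 10, 20, 40, 60]:
    prob = prob_srw_on_Z(n, y)
    bound = 2 * exp(-y**2 / (2 * n))
    assert prob <= bound
    # for y of size about sqrt(n) the two quantities differ only by polynomial factors in n
    print(f"y = {y:3d}: P = {prob:.3e}  <=  Varopoulos-Carne bound {bound:.3e}")
```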

This inequality has many applications, particularly with regards to relating the entropy, mixing time, and concentration of random walks with volume growth of balls; see this text of Lyons and Peres for some examples.

For sake of comparison, here is a continuous counterpart to the Varopoulos-Carne inequality:

Theorem 2 (Continuous Varopoulos-Carne inequality) Let {t > 0}, and let {f,g \in L^2({\bf R}^d)} be supported on compact sets {F,G} respectively. Then

\displaystyle  |\langle e^{t\Delta/2} f, g \rangle| \leq \sqrt{\frac{2t}{\pi d(F,G)^2}} \exp( - d(F,G)^2 / 2t ) \|f\|_{L^2} \|g\|_{L^2}

where {d(F,G)} is the Euclidean distance between {F} and {G}.

Proof: By Fourier inversion one has

\displaystyle  e^{-t\xi^2/2} = \frac{1}{\sqrt{2\pi t}} \int_{\bf R} e^{-s^2/2t} e^{is\xi}\ ds

\displaystyle  = \sqrt{\frac{2}{\pi t}} \int_0^\infty e^{-s^2/2t} \cos(s \xi )\ ds

for any real {\xi}, and thus

\displaystyle  \langle e^{t\Delta/2} f, g\rangle = \sqrt{\frac{2}{\pi t}} \int_0^\infty e^{-s^2/2t} \langle \cos(s \sqrt{-\Delta} ) f, g \rangle\ ds.

By finite speed of propagation, the inner product {\langle \cos(s \sqrt{-\Delta} ) f, g \rangle} vanishes when {s < d(F,G)}; otherwise, we can use Cauchy-Schwarz and the contractive nature of {\cos(s \sqrt{-\Delta} )} to bound this inner product by {\|f\|_{L^2} \|g\|_{L^2}}. Thus

\displaystyle  |\langle e^{t\Delta/2} f, g\rangle| \leq \sqrt{\frac{2}{\pi t}} \|f\|_{L^2} \|g\|_{L^2} \int_{d(F,G)}^\infty e^{-s^2/2t}\ ds.

Bounding {e^{-s^2/2t}} by {e^{-d(F,G)^2/2t} e^{-d(F,G) (s-d(F,G))/t}}, we obtain the claim. \Box

Observe that the argument is quite general and can be applied, for instance, to Riemannian manifolds other than {{\bf R}^d}.
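
For what it is worth, Theorem 2 can also be checked numerically; here is a crude one-dimensional illustration (Python/numpy), taking {f,g} to be indicators of two unit intervals at distance {2} (an arbitrary illustrative choice) and evaluating {\langle e^{t\Delta/2} f, g \rangle} directly from the heat kernel.

```python
import numpy as np

def heat_inner_product(t, F, G, m=500):
    """<e^{t*Delta/2} 1_F, 1_G> on R, by integrating the heat kernel over F x G."""
    xs = np.linspace(F[0], F[1], m)
    ys = np.linspace(G[0], G[1], m)
    dx, dy = (F[1] - F[0]) / m, (G[1] - G[0]) / m
    X, Y = np.meshgrid(xs, ys)
    kernel = np.exp(-(X - Y) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    return kernel.sum() * dx * dy

F, G = (0.0, 1.0), (3.0, 4.0)    # supports of f and g; d(F, G) = 2
d = 2.0
norm_f = norm_g = 1.0            # L^2 norms of the indicators of unit intervals

for t in [0.1, 0.5, 1.0, 2.0]:
    lhs = heat_inner_product(t, F, G)
    rhs = np.sqrt(2 * t / (np.pi * d**2)) * np.exp(-d**2 / (2 * t)) * norm_f * norm_g
    assert lhs <= rhs
    print(f"t = {t}: <e^(t*Delta/2) f, g> ~ {lhs:.3e}  <=  bound {rhs:.3e}")
```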

In the last three notes, we discussed the Bourgain-Gamburd expansion machine and two of its three ingredients, namely quasirandomness and product theorems, leaving only the non-concentration ingredient to discuss. We can summarise the results of the last three notes, in the case of fields of prime order, as the following theorem.

Theorem 1 (Non-concentration implies expansion in {SL_d}) Let {p} be a prime, let {d \geq 1}, and let {S} be a symmetric set of elements in {G := SL_d(F_p)} of cardinality {|S|=k} not containing the identity. Write {\mu := \frac{1}{|S|} \sum_{s\in S}\delta_s}, and suppose that one has the non-concentration property

\displaystyle  \sup_{H < G}\mu^{(n)}(H) < |G|^{-\kappa} \ \ \ \ \ (1)

for some {\kappa>0} and some even integer {n \leq \Lambda \log |G|}. Then {Cay(G,S)} is a two-sided {\epsilon}-expander for some {\epsilon>0} depending only on {k, d, \kappa,\Lambda}.

Proof: From (1) we see that {\mu^{(n)}} is not supported in any proper subgroup {H} of {G}, which implies that {S} generates {G}. The claim now follows from the Bourgain-Gamburd expansion machine (Theorem 2 of Notes 4), the product theorem (Theorem 1 of Notes 5), and quasirandomness (Exercise 8 of Notes 3). \Box

Remark 1 The same argument also works if we replace {F_p} by the field {F_{p^j}} of order {p^j} for some bounded {j}. However, there is a difficulty in the regime when {j} is unbounded, because the quasirandomness property becomes too weak for the Bourgain-Gamburd expansion machine to be directly applicable. On the other hand, the above type of theorem was generalised to the setting of cyclic groups {{\bf Z}/q{\bf Z}} with {q} square-free by Varju, to arbitrary {q} by Bourgain and Varju, and to more general algebraic groups than {SL_d} and square-free {q} by Salehi Golsefidy and Varju. It may be that some modification of the proof techniques in these papers could also handle the field case {F_{p^j}} with unbounded {j}.

It thus remains to construct tools that can establish the non-concentration property (1). The situation is particularly simple in {SL_2(F_p)}, as we have a good understanding of the subgroups of that group. Indeed, from Theorem 14 of Notes 5, we obtain the following corollary to Theorem 1:

Corollary 2 (Non-concentration implies expansion in {SL_2}) Let {p} be a prime, and let {S} be a symmetric set of elements in {G := SL_2(F_p)} of cardinality {|S|=k} not containing the identity. Write {\mu := \frac{1}{|S|} \sum_{s\in S}\delta_s}, and suppose that one has the non-concentration property

\displaystyle  \sup_{B}\mu^{(n)}(B) < |G|^{-\kappa} \ \ \ \ \ (2)

for some {\kappa>0} and some even integer {n \leq \Lambda \log |G|}, where {B} ranges over all Borel subgroups of {SL_2(\overline{F_p})}. Then, if {|G|} is sufficiently large depending on {k,\kappa,\Lambda}, {Cay(G,S)} is a two-sided {\epsilon}-expander for some {\epsilon>0} depending only on {k, \kappa,\Lambda}.

It turns out that (2) can be verified in many cases by exploiting the solvable nature of the Borel subgroups {B}. We give two examples of this in these notes. The first result, due to Bourgain and Gamburd (with earlier partial results by Gamburd and by Shalom), generalises Selberg’s expander construction to the case when {S} generates a thin subgroup of {SL_2({\bf Z})}:

Theorem 3 (Expansion in thin subgroups) Let {S} be a symmetric subset of {SL_2({\bf Z})} not containing the identity, and suppose that the group {\langle S \rangle} generated by {S} is not virtually solvable. Then as {p} ranges over all sufficiently large primes, the Cayley graphs {Cay(SL_2(F_p), \pi_p(S))} form a two-sided expander family, where {\pi_p: SL_2({\bf Z}) \rightarrow SL_2(F_p)} is the usual projection.

Remark 2 One corollary of Theorem 3 (or of the non-concentration estimate (3) below) is that {\pi_p(S)} generates {SL_2(F_p)} for all sufficiently large {p}, if {\langle S \rangle} is not virtually solvable. This is a special case of a much more general result, known as the strong approximation theorem, although this is certainly not the most direct way to prove such a theorem. Conversely, the strong approximation property is used in generalisations of this result to higher rank groups than {SL_2}.

Exercise 1 In the converse direction, if {\langle S\rangle} is virtually solvable, show that for sufficiently large {p}, {\pi_p(S)} fails to generate {SL_2(F_p)}. (Hint: use Theorem 14 from Notes 5 to prevent {SL_2(F_p)} from having bounded index solvable subgroups.)

Exercise 2 (Lubotzky’s 1-2-3 problem) Let {S := \{ \begin{pmatrix}1 & \pm 3 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix}1 & 0 \\ \pm 3 & 1 \end{pmatrix} \}}.

  • (i) Show that {S} generates a free subgroup of {SL_2({\bf Z})}. (Hint: use a ping-pong argument, as in Exercise 23 of Notes 2.)
  • (ii) Show that if {v, w} are two distinct elements of the sector {\{ (x,y) \in {\bf R}^2_+: x/2 < y < 2x \}}, then there is no element {g \in \langle S \rangle} for which {gv = w}. (Hint: this is another ping-pong argument.) Conclude that {\langle S \rangle} has infinite index in {SL_2({\bf Z})}. (Contrast this with the situation in which the {3} coefficients in {S} are replaced by {1} or {2}, in which case {\langle S \rangle} is either all of {SL_2({\bf Z})}, or a finite index subgroup, as demonstrated in Exercise 23 of Notes 2).
  • (iii) Show that the Cayley graphs {Cay(SL_2(F_p), \pi_p(S))} for sufficiently large primes {p} form a two-sided expander family.

Remark 3 Theorem 3 has been generalised to arbitrary linear groups, and with {F_p} replaced by {{\bf Z}/q{\bf Z}} for square-free {q}; see this paper of Salehi Golsefidy and Varju. In this more general setting, the condition of virtual solvability must be replaced by the condition that the connected component of the Zariski closure of {\langle S \rangle} is perfect. An effective version of Theorem 3 (with completely explicit constants) was recently obtained by Kowalski.

The second example concerns Cayley graphs constructed using random elements of {SL_2(F_p)}.

Theorem 4 (Random generators expand) Let {p} be a prime, and let {x,y} be two elements of {SL_2(F_p)} chosen uniformly at random. Then with probability {1-o_{p \rightarrow \infty}(1)}, {Cay(SL_2(F_p), \{x,x^{-1},y,y^{-1}\})} is a two-sided {\epsilon}-expander for some absolute constant {\epsilon}.

Remark 4 As with Theorem 3, Theorem 4 has also been extended to a number of other groups, such as the Suzuki groups (in this paper of Breuillard, Green, and Tao), and more generally to finite simple groups of Lie type of bounded rank (in forthcoming work of Breuillard, Green, Guralnick, and Tao). There are a number of other constructions of expanding Cayley graphs in such groups (and in other interesting groups, such as the alternating groups) beyond those discussed in these notes; see this recent survey of Lubotzky for further discussion. It has been conjectured by Lubotzky and Weiss that any pair {x,y} of (say) {SL_2(F_p)} that generates the group, is a two-sided {\epsilon}-expander for an absolute constant {\epsilon}: in the case of {SL_2(F_p)}, this has been established for a density one set of primes by Breuillard and Gamburd.

— 1. Expansion in thin subgroups —

We now prove Theorem 3. The first observation is that the expansion property is monotone in the group {\langle S \rangle}:

Exercise 3 Let {S, S'} be symmetric subsets of {SL_2({\bf Z})} not containing the identity, such that {\langle S \rangle \subset \langle S' \rangle}. Suppose that {Cay(SL_2(F_p), \pi_p(S))} is a two-sided expander family for sufficiently large primes {p}. Show that {Cay(SL_2(F_p), \pi_p(S'))} is also a two-sided expander family.

As a consequence, Theorem 3 follows from the following two statements:

Theorem 5 (Tits alternative) Let {\Gamma \subset SL_2({\bf Z})} be a group. Then exactly one of the following statements holds:

  • (i) {\Gamma} is virtually solvable.
  • (ii) {\Gamma} contains a copy of the free group {F_2} of two generators as a subgroup.

Theorem 6 (Expansion in free groups) Let {x,y \in SL_2({\bf Z})} be generators of a free subgroup of {SL_2({\bf Z})}. Then as {p} ranges over all sufficiently large primes, the Cayley graphs {Cay(SL_2(F_p), \pi_p(\{x,y,x^{-1},y^{-1}\}))} form a two-sided expander family.

Theorem 5 is a special case of the famous Tits alternative, which among other things allows one to replace {SL_2({\bf Z})} by {GL_d(k)} for any {d \geq 1} and any field {k} of characteristic zero (and fields of positive characteristic are also allowed, if one adds the requirement that {\Gamma} be finitely generated). We will not prove the full Tits alternative here, but instead just give an ad hoc proof of the special case in Theorem 5 in the following exercise.

Exercise 4 Given any matrix {g \in SL_2({\bf Z})}, the singular values are {\|g\|_{op}} and {\|g\|_{op}^{-1}}, and we can apply the singular value decomposition to decompose

\displaystyle  g = u_1(g) \|g\|_{op} v_1(g)^* + u_2(g) \|g\|_{op}^{-1} v_2(g)^*

where {u_1(g),u_2(g)\in {\bf C}^2} and {v_1(g), v_2(g) \in {\bf C}^2} are orthonormal bases. (When {\|g\|_{op}>1}, these bases are uniquely determined up to phase rotation.) We let {\tilde u_1(g) \in {\bf CP}^1} be the projection of {u_1(g)} to the projective complex plane, and similarly define {\tilde v_2(g)}.

Let {\Gamma} be a subgroup of {SL_2({\bf Z})}. Call a pair {(u,v) \in {\bf CP}^1 \times {\bf CP}^1} a limit point of {\Gamma} if there exists a sequence {g_n \in \Gamma} with {\|g_n\|_{op} \rightarrow \infty} and {(\tilde u_1(g_n), \tilde v_2(g_n)) \rightarrow (u,v)}.

  • (i) Show that if {\Gamma} is infinite, then there is at least one limit point.
  • (ii) Show that if {(u,v)} is a limit point, then so is {(v,u)}.
  • (iii) Show that if there are two limit points {(u,v), (u',v')} with {\{u,v\} \cap \{u',v'\} = \emptyset}, then there exist {g,h \in \Gamma} that generate a free group. (Hint: Choose {(\tilde u_1(g), \tilde v_2(g))} close to {(u,v)} and {(\tilde u_1(h),\tilde v_2(h))} close to {(u',v')}, and consider the action of {g} and {h} on {{\bf CP}^1}, and specifically on small neighbourhoods of {u,v,u',v'}, and set up a ping-pong type situation.)
  • (iv) Show that if {g \in SL_2({\bf Z})} is hyperbolic (i.e. it has an eigenvalue greater than 1), with eigenvectors {u,v}, then the projectivisations {(\tilde u,\tilde v)} of {u,v} form a limit point. Similarly, if {g} is regular parabolic (i.e. it has an eigenvalue at 1, but is not the identity) with eigenvector {u}, show that {(\tilde u,\tilde u)} is a limit point.
  • (v) Show that if {\Gamma} has no free subgroup of two generators, then all hyperbolic and regular parabolic elements of {\Gamma} have a common eigenvector. Conclude that all such elements lie in a solvable subgroup of {\Gamma}.
  • (vi) Show that if an element {g \in SL_2({\bf Z})} is neither hyperbolic nor regular parabolic, and is not a multiple of the identity, then {g} is conjugate to a rotation by {\pi/2} (in particular, {g^2=-1}).
  • (vii) Establish Theorem 5. (Hint: show that two square roots of {-1} in {SL_2({\bf Z})} cannot multiply to another square root of {-1}.)

Now we prove Theorem 6. Let {\Gamma} be a free subgroup of {SL_2({\bf Z})} generated by two generators {x,y}. Let {\mu := \frac{1}{4} (\delta_x +\delta_{x^{-1}} + \delta_y + \delta_{y^{-1}})} be the probability measure generating a random walk on {SL_2({\bf Z})}, thus {(\pi_p)_* \mu} is the corresponding generator on {SL_2(F_p)}. By Corollary 2, it thus suffices to show that

\displaystyle  \sup_{B}((\pi_p)_* \mu)^{(n)}(B) < p^{-\kappa} \ \ \ \ \ (3)

for all sufficiently large {p}, some absolute constant {\kappa>0}, and some even {n = O(\log p)} (depending on {p}, of course), where {B} ranges over Borel subgroups.

As {\pi_p} is a homomorphism, one has {((\pi_p)_* \mu)^{(n)}(B) = (\pi_p)_* (\mu^{(n)})(B) = \mu^{(n)}(\pi_p^{-1}(B))} and so it suffices to show that

\displaystyle  \sup_{B} \mu^{(n)}(\pi_p^{-1}(B)) < p^{-\kappa}.

To deal with the supremum here, we will use an argument of Bourgain and Gamburd, taking advantage of the fact that all Borel subgroups of {SL_2} obey a common group law, the point being that free groups such as {\Gamma} obey such laws only very rarely. More precisely, we use the fact that the Borel subgroups are solvable of derived length two; in particular we have

\displaystyle  [[a,b],[c,d]] = 1 \ \ \ \ \ (4)

for all {a,b,c,d \in B}. Now, {\mu^{(n)}} is supported on matrices in {SL_2({\bf Z})} whose coefficients have size {O(\exp(O(n)))} (where we allow the implied constants to depend on the choice of generators {x,y}), and so {(\pi_p)_*( \mu^{(n)} )} is supported on matrices in {SL_2(F_p)} whose coefficients also have size {O(\exp(O(n)))}. If {n} is less than a sufficiently small multiple of {\log p}, these coefficients are then less than {p^{1/10}} (say). As such, if {\tilde a,\tilde b,\tilde c,\tilde d \in SL_2({\bf Z})} lie in the support of {\mu^{(n)}} and their projections {a = \pi_p(\tilde a), \ldots, d = \pi_p(\tilde d)} obey the word law (4) in {SL_2(F_p)}, then the original matrices {\tilde a, \tilde b, \tilde c, \tilde d} obey the word law (4) in {SL_2({\bf Z})}. (This lifting of identities from the characteristic {p} setting of {SL_2(F_p)} to the characteristic {0} setting of {SL_2({\bf Z})} is a simple example of the “Lefschetz principle”.)
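
The coefficient bound used here is easy to see experimentally; the following short Python sketch multiplies out random words in an explicit free subgroup (using, purely for illustration, the generators from Exercise 2 above) and confirms that the entries grow at most exponentially in the word length.

```python
import random

def mat_mul(A, B):
    """Multiply two 2x2 integer matrices given as ((a, b), (c, d))."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

# Illustrative choice: the generators from Exercise 2 (Lubotzky's 1-2-3 problem) and
# their inverses; these generate a free subgroup of SL_2(Z).
gens = {'x': ((1, 3), (0, 1)), 'X': ((1, -3), (0, 1)),
        'y': ((1, 0), (3, 1)), 'Y': ((1, 0), (-3, 1))}

random.seed(0)
n = 60
for _ in range(50):
    M = ((1, 0), (0, 1))
    for letter in (random.choice('xXyY') for _ in range(n)):
        M = mat_mul(M, gens[letter])
    # each generator has operator norm < 4, so every entry of a length-n word is < 4^n
    assert max(abs(e) for row in M for e in row) < 4 ** n
print(f"entries of all sampled length-{n} words are of size exp(O(n)), as claimed")
```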

To summarise, if we let {E_{n,p,B}} be the set of all elements of {\pi_p^{-1}(B)} that lie in the support of {\mu^{(n)}}, then (4) holds for all {a,b,c,d \in E_{n,p,B}}. This severely limits the size of {E_{n,p,B}} to only be of polynomial size, rather than exponential size:

Proposition 7 Let {E} be a subset of the support of {\mu^{(n)}} (thus, {E} consists of words in {x,y,x^{-1},y^{-1}} of length {n}) such that the law (4) holds for all {a,b,c,d \in E}. Then {|E| \ll n^2}.

The proof of this proposition is laid out in the exercise below.

Exercise 5 Let {\Gamma} be a free group generated by two generators {x,y}. Let {B} be the set of all words of length at most {n} in {x,y,x^{-1},y^{-1}}.

  • (i) Show that if {a,b \in \Gamma} commute, then {a, b} lie in the same cyclic group, thus {a = c^i, b = c^j} for some {c \in \Gamma} and {i,j \in {\bf Z}}.
  • (ii) Show that if {a \in \Gamma}, there are at most {O(n)} elements of {B} that commute with {a}.
  • (iii) Show that if {a,c \in \Gamma}, there are at most {O(n)} elements {b} of {B} with {[a,b] = c}.
  • (iv) Prove Proposition 7.

Now we can conclude the proof of Theorem 3:

Exercise 6 Let {\Gamma} be a free group generated by two generators {x,y}.

  • (i) Show that {\| \mu^{(n)} \|_{\ell^\infty(\Gamma)} \ll c^n} for some absolute constant {0 < c<1}. (For much more precise information on {\mu^{(n)}}, see this paper of Kesten.)
  • (ii) Conclude the proof of Theorem 3.

— 2. Random generators expand —

We now prove Theorem 4. Let {{\bf F}_2} be the free group on two formal generators {a,b}, and let {\mu := \frac{1}{4}(\delta_a + \delta_b + \delta_{a^{-1}}+ \delta_{b^{-1}})} be the generator of the random walk. For any word {w \in {\bf F}_2} and any {x,y} in a group {G}, let {w(x,y) \in G} be the element of {G} formed by substituting {x,y} for {a,b} respectively in the word {w}; thus {w} can be viewed as a map {w: G \times G \rightarrow G} for any group {G}. Observe that if {w} is drawn randomly using the distribution {\mu^{(n)}}, and {x,y \in SL_2(F_p)}, then {w(x,y)} is distributed according to the law {\tilde \mu^{(n)}}, where {\tilde \mu := \frac{1}{4}(\delta_x + \delta_y + \delta_{x^{-1}}+ \delta_{y^{-1}})}. Applying Corollary 2, it suffices to show that whenever {p} is a large prime and {x,y} are chosen uniformly and independently at random from {SL_2(F_p)}, then with probability {1-o_{p \rightarrow \infty}(1)}, one has

\displaystyle  \sup_B {\bf P}_w ( w(x,y) \in B ) \leq p^{-\kappa} \ \ \ \ \ (5)

for some absolute constant {\kappa}, where {B} ranges over all Borel subgroups of {SL_2(\overline{F_p})} and {w} is drawn from the law {\mu^{(n)}} for some even natural number {n = O(\log p)}.

Let {B_n} denote the words in {{\bf F}_2} of length at most {n}. We may use the law (4) to obtain a good bound on the supremum in (5), assuming a certain non-degeneracy property of the word evaluations {w(x,y)}:

Exercise 7 Let {n} be a natural number, and suppose that {x,y \in SL_2(F_p)} are such that {w(x,y) \neq 1} for all {w \in B_{100n} \backslash \{1\}}. Show that

\displaystyle  \sup_B {\bf P}_w ( w(x,y) \in B ) \ll \exp(-cn)

for some absolute constant {c>0}, where {w} is drawn from the law {\mu^{(n)}}. (Hint: use (4) and the hypothesis to lift the problem up to {{\bf F}_2}, at which point one can use Proposition 7 and Exercise 6.)

In view of this exercise, it suffices to show that with probability {1-o_{p \rightarrow\infty}(1)}, one has {w(x,y) \neq 1} for all {w \in B_{100n} \backslash \{1\}} for some {n} comparable to a small multiple of {\log p}. As {B_{100n}} has {\exp(O(n))} elements, it thus suffices by the union bound to show that

\displaystyle  {\bf P}_{x,y}(w(x,y)=1) \leq p^{-\gamma} \ \ \ \ \ (6)

for some absolute constant {\gamma > 0}, and any {w \in {\bf F}_2 \backslash \{1\}} of length less than {c\log p} for some sufficiently small absolute constant {c>0}.

Let us now fix a non-identity word {w} of length {|w|} less than {c\log p}, and consider {w} as a function from {SL_2(k) \times SL_2(k)} to {SL_2(k)} for an arbitrary field {k}. We can identify {SL_2(k)} with the set {\{ (a,b,c,d)\in k^4: ad-bc=1\}}. A routine induction then shows that the expression {w((a,b,c,d),(a',b',c',d'))} is then a polynomial in the eight variables {a,b,c,d,a',b',c',d'} of degree {O(|w|)} and coefficients which are integers of size {O( \exp( O(|w|) ) )}. Let us then make the additional restriction to the case {a,a' \neq 0}, in which case we can write {d = \frac{bc+1}{a}} and {d' =\frac{b'c'+1}{a'}}. Then {w((a,b,c,d),(a',b',c',d'))} is now a rational function of {a,b,c,a',b',c'} whose numerator is a polynomial of degree {O(|w|)} and coefficients of size {O( \exp( O(|w|) ) )}, and the denominator is a monomial of {a,a'} of degree {O(|w|)}.
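
Here is a small sympy sketch of this polynomial structure for one illustrative word (the commutator {w = x y x^{-1} y^{-1}}, of length {4}); the degree and coefficient bounds claimed above can be read off directly.

```python
import sympy as sp

a, b, c, d, a2, b2, c2, d2 = sp.symbols('a b c d a2 b2 c2 d2')
X = sp.Matrix([[a, b], [c, d]])
Y = sp.Matrix([[a2, b2], [c2, d2]])
# On SL_2 the inverse equals the adjugate (determinant one), so word evaluations
# remain polynomial in the eight matrix entries.
Xi = sp.Matrix([[d, -b], [-c, a]])
Yi = sp.Matrix([[d2, -b2], [-c2, a2]])

# an illustrative word of length 4: the commutator w(X, Y) = X Y X^{-1} Y^{-1}
W = (X * Y * Xi * Yi).applyfunc(sp.expand)

polys = [sp.Poly(entry, a, b, c, d, a2, b2, c2, d2) for entry in W]
print("total degrees of the four entries:", [P.total_degree() for P in polys])
print("largest coefficient magnitude:", max(abs(cf) for P in polys for cf in P.coeffs()))
assert all(P.total_degree() <= 4 for P in polys)  # degree at most the word length |w| = 4
```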

We then specialise this rational function to the field {k=F_p}. It is conceivable that when one does so, the rational function collapses to the constant polynomial {(1,0,0,1)}, thus {w((a,b,c,d),(a',b',c',d'))=1} for all {(a,b,c,d),(a',b',c',d') \in SL_2(F_p)} with {a,a' \neq 0}. (For instance, this would be the case if {w(x,y) = x^{|SL_2(F_p)|}}, by Lagrange’s theorem, if it were not for the fact that {|w|} is far too large here.) But suppose that this rational function does not collapse to the constant rational function. Applying the Schwarz-Zippel lemma (Exercise 23 from Notes 5), we then see that the set of pairs {(a,b,c,d),(a',b',c',d') \in SL_2(F_p)} with {a,a' \neq 0} and {w((a,b,c,d),(a',b',c',d'))=1} is at most {O( |w| p^5 )}; adding in the {a=0} and {a'=0} cases, one still obtains a bound of {O(|w|p^5)}, which is acceptable since {|SL_2(F_p)|^2 \sim p^6} and {|w| = O( \log p )}. Thus, the only remaining case to consider is when the rational function {w((a,b,c,d),(a',b',c',d'))} is identically {1} on {SL_2(F_p)} with {a,a' \neq 0}.

Now we perform another “Lefschetz principle” manoeuvre to change the underlying field. Recall that the denominator of the rational function {w((a,b,c,d),(a',b',c',d'))} is a monomial in {a,a'}, and the numerator has coefficients of size {O(\exp(O(|w|)))}. If {|w|} is less than {c\log p} for a sufficiently small {c}, we conclude in particular (for {p} large enough) that the coefficients all have magnitude less than {p}. As such, the only way that this function can be identically {1} on {SL_2(F_p)} is if it is identically {1} on {SL_2(k)} for all {k} with {a,a' \neq 0}, and hence for {a=0} or {a'=0} also by taking Zariski closures.

On the other hand, we know that for some choices of {k}, e.g. {k={\bf R}}, {SL_2(k)} contains a copy {\Gamma} of the free group on two generators (see e.g. Exercise 23 of Notes 2). As such, it is not possible for any non-identity word {w} to be identically trivial on {SL_2(k) \times SL_2(k)}. Thus this case cannot actually occur, completing the proof of (6) and hence of Theorem 4.

Remark 5 We see from the above argument that the existence of subgroups {\Gamma} of an algebraic group with good “independence” properties – such as that of generating a free group – can be useful in studying the expansion properties of that algebraic group, even if the field of interest in the latter is distinct from that of the former. For more complicated algebraic groups than {SL_2}, in which laws such as (4) are not always available, it turns out to be useful to place further properties on the subgroup {\Gamma}, for instance by requiring that all non-abelian subgroups of that group be Zariski dense (a property which has been called strong density), as this turns out to be useful for preventing random walks from concentrating in proper algebraic subgroups. See this paper of Breuillard, Guralnick, Green and Tao for constructions of strongly dense free subgroups of algebraic groups and further discussion.

In the previous set of notes we saw how a representation-theoretic property of groups, namely Kazhdan’s property (T), could be used to demonstrate expansion in Cayley graphs. In this set of notes we discuss a different representation-theoretic property of groups, namely quasirandomness, which is also useful for demonstrating expansion in Cayley graphs, though in a somewhat different way to property (T). For instance, whereas property (T), being qualitative in nature, is only interesting for infinite groups such as {SL_d({\bf Z})} or {SL_d({\bf R})}, and only creates Cayley graphs after passing to a finite quotient, quasirandomness is a quantitative property which is directly applicable to finite groups, and is able to deduce expansion in a Cayley graph, provided that random walks in that graph are known to become sufficiently “flat” in a certain sense.

The definition of quasirandomness is easy enough to state:

Definition 1 (Quasirandom groups) Let {G} be a finite group, and let {D \geq 1}. We say that {G} is {D}-quasirandom if all non-trivial unitary representations {\rho: G \rightarrow U(H)} of {G} have dimension at least {D}. (Recall a representation is trivial if {\rho(g)} is the identity for all {g \in G}.)

Exercise 1 Let {G} be a finite group, and let {D \geq 1}. A unitary representation {\rho: G \rightarrow U(H)} is said to be irreducible if {H} has no {G}-invariant subspaces other than {\{0\}} and {H}. Show that {G} is {D}-quasirandom if and only if every non-trivial irreducible representation of {G} has dimension at least {D}.

Remark 1 The terminology “quasirandom group” was introduced explicitly (though with slightly different notational conventions) by Gowers in 2008 in his detailed study of the concept; the name arises because dense Cayley graphs in quasirandom groups are quasirandom graphs in the sense of Chung, Graham, and Wilson, as we shall see below. This property had already been used implicitly to construct expander graphs by Sarnak and Xue in 1991, and more recently by Gamburd in 2002 and by Bourgain and Gamburd in 2008. One can of course define quasirandomness for more general locally compact groups than the finite ones, but we will only need this concept in the finite case. (A paper of Kunze and Stein from 1960, for instance, exploits the quasirandomness properties of the locally compact group {SL_2({\bf R})} to obtain mixing estimates in that group.)

Quasirandomness behaves fairly well with respect to quotients and short exact sequences:

Exercise 2 Let {0 \rightarrow H \rightarrow G \rightarrow K \rightarrow 0} be a short exact sequence of finite groups {H,G,K}.

  • (i) If {G} is {D}-quasirandom, show that {K} is {D}-quasirandom also. (Equivalently: any quotient of a {D}-quasirandom finite group is again a {D}-quasirandom finite group.)
  • (ii) Conversely, if {H} and {K} are both {D}-quasirandom, show that {G} is {D}-quasirandom also. (In particular, the direct or semidirect product of two {D}-quasirandom finite groups is again a {D}-quasirandom finite group.)

Informally, we will call {G} quasirandom if it is {D}-quasirandom for some “large” {D}, though the precise meaning of “large” will depend on context. For applications to expansion in Cayley graphs, “large” will mean “{D \geq |G|^c} for some constant {c>0} independent of the size of {G}“, but other regimes of {D} are certainly of interest.

The way we have set things up, the trivial group {G = \{1\}} is infinitely quasirandom (i.e. it is {D}-quasirandom for every {D}). This is however a degenerate case and will not be discussed further here. In the non-trivial case, a finite group can only be quasirandom if it is large and has no large subgroups:

Exercise 3 Let {D \geq 1}, and let {G} be a finite {D}-quasirandom group.

  • (i) Show that if {G} is non-trivial, then {|G| \geq D+1}. (Hint: use the mean zero component {\tau\downharpoonright_{\ell^2(G)_0}} of the regular representation {\tau: G \rightarrow U(\ell^2(G))}.) In particular, non-trivial finite groups cannot be infinitely quasirandom.
  • (ii) Show that any proper subgroup {H} of {G} has index {[G:H] \geq D+1}. (Hint: use the mean zero component of the quasiregular representation.)

The following exercise shows that quasirandom groups have to be quite non-abelian, and in particular perfect:

Exercise 4 (Quasirandomness, abelianness, and perfection) Let {G} be a finite group.

  • (i) If {G} is abelian and non-trivial, show that {G} is not {2}-quasirandom. (Hint: use Fourier analysis or the classification of finite abelian groups.)
  • (ii) Show that {G} is {2}-quasirandom if and only if it is perfect, i.e. the commutator group {[G,G]} is equal to {G}. (Equivalently, {G} is {2}-quasirandom if and only if it has no non-trivial abelian quotients.)

Later on we shall see that there is a converse to the above two exercises; any non-trivial perfect finite group with no large subgroups will be quasirandom.

Exercise 5 Let {G} be a finite {D}-quasirandom group. Show that for any subgroup {G'} of {G}, {G'} is {D/[G:G']}-quasirandom, where {[G:G'] := |G|/|G'|} is the index of {G'} in {G}. (Hint: use induced representations.)

Now we give an example of a more quasirandom group.

Lemma 2 (Frobenius lemma) If {F_p} is a field of some prime order {p}, then {SL_2(F_p)} is {\frac{p-1}{2}}-quasirandom.

This should be compared with the cardinality {|SL_2(F_p)|} of the special linear group, which is easily computed to be {(p^2-1) \times p = p^3 - p}.
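
This count is easy to confirm by brute force for small primes; here is a quick Python check:

```python
from itertools import product

def sl2_size(p):
    """Count the matrices (a, b, c, d) over F_p with ad - bc = 1 by brute force."""
    return sum(1 for a, b, c, d in product(range(p), repeat=4) if (a * d - b * c) % p == 1)

for p in [2, 3, 5, 7]:
    assert sl2_size(p) == p**3 - p
    print(f"|SL_2(F_{p})| = {sl2_size(p)} = {p}^3 - {p}")
```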

Proof: We may of course take {p} to be odd. Suppose for contradiction that we have a non-trivial representation {\rho: SL_2(F_p) \rightarrow U_d({\bf C})} on a unitary group of some dimension {d} with {d < \frac{p-1}{2}}. Set {a} to be the group element

\displaystyle a := \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},

and suppose first that {\rho(a)} is non-trivial. Since {a^p=1}, we have {\rho(a)^p=1}; thus all the eigenvalues of {\rho(a)} are {p^{th}} roots of unity. On the other hand, by conjugating {a} by diagonal matrices in {SL_2(F_p)}, we see that {a} is conjugate to {a^m} (and hence {\rho(a)} conjugate to {\rho(a)^m}) whenever {m} is a quadratic residue mod {p}. As such, the eigenvalues of {\rho(a)} must be permuted by the operation {x \mapsto x^m} for any quadratic residue mod {p}. Since {\rho(a)} has at least one non-trivial eigenvalue, and there are {\frac{p-1}{2}} distinct quadratic residues, we conclude that {\rho(a)} has at least {\frac{p-1}{2}} distinct eigenvalues. But {\rho(a)} is a {d \times d} matrix with {d < \frac{p-1}{2}}, a contradiction. Thus {a} lies in the kernel of {\rho}. By conjugation, we then see that this kernel contains all unipotent matrices. But these matrices generate {SL_2(F_p)} (see exercise below), and so {\rho} is trivial, a contradiction. \Box
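
The conjugation step in this argument ({a} being conjugate to {a^m} for nonzero quadratic residues {m}) can be checked concretely; here is a quick Python sketch for the illustrative prime {p=11}:

```python
def mat_mul_mod(A, B, p):
    """Multiply two 2x2 matrices modulo p."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g) % p, (a * f + b * h) % p), ((c * e + d * g) % p, (c * f + d * h) % p)

p = 11
a = ((1, 1), (0, 1))
for t in range(1, p):
    tinv = pow(t, p - 2, p)                    # t^{-1} mod p, by Fermat's little theorem
    D, Dinv = ((t, 0), (0, tinv)), ((tinv, 0), (0, t))
    conj = mat_mul_mod(mat_mul_mod(D, a, p), Dinv, p)
    m = (t * t) % p                            # a nonzero quadratic residue mod p
    assert conj == ((1, m), (0, 1))            # i.e. D a D^{-1} = a^m
print(f"for p = {p}: a is conjugate to a^m for every nonzero quadratic residue m")
```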

Exercise 6 Show that for any prime {p}, the unipotent matrices

\displaystyle \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ t & 1 \end{pmatrix}

for {t} ranging over {F_p} generate {SL_2(F_p)} as a group.

Exercise 7 Let {G} be a finite group, and let {D \geq 1}. If {G} is generated by a collection {G_1,\ldots,G_k} of {D}-quasirandom subgroups, show that {G} is itself {D}-quasirandom.

Exercise 8 Show that {SL_d(F_p)} is {\frac{p-1}{2}}-quasirandom for any {d \geq 2} and any prime {p}. (This is not sharp; the optimal bound here is {\gg_d p^{d-1}}, which follows from the results of Landazuri and Seitz.)

As a corollary of the above results and Exercise 2, we see that the projective special linear group {PSL_d(F_p)} is also {\frac{p-1}{2}}-quasirandom.

Remark 2 One can ask whether the bound {\frac{p-1}{2}} in Lemma 2 is sharp, assuming of course that {p} is odd. Noting that {SL_2(F_p)} acts linearly on the plane {F_p^2}, we see that it also acts projectively on the projective line {PF_p^1 := (F_p^2 \backslash \{0\}) / F_p^\times}, which has {p+1} elements. Thus {SL_2(F_p)} acts via the quasiregular representation on the {p+1}-dimensional space {\ell^2(PF_p^1)}, and also on the {p}-dimensional subspace {\ell^2(PF_p^1)_0}; this latter representation (known as the Steinberg representation) is irreducible. This shows that the {\frac{p-1}{2}} bound cannot be improved beyond {p}. More generally, given any character {\chi: F_p^\times \rightarrow S^1}, {SL_2(F_p)} acts on the {p+1}-dimensional space {V_\chi} of functions {f \in \ell^2( F_p^2 \backslash \{0\} )} that obey the twisted dilation invariance {f(tx) = \chi(t) f(x)} for all {t \in F_p^\times} and {x \in F_p^2 \backslash \{0\}}; these are known as the principal series representations. When {\chi} is the trivial character, this is the quasiregular representation discussed earlier. For most other characters, this is an irreducible representation, but it turns out that when {\chi} is the quadratic character (thus taking values in {\{-1,+1\}} while being non-trivial), the principal series representation splits into the direct sum of two {\frac{p+1}{2}}-dimensional representations, which comes very close to matching the bound in Lemma 2. There is a parallel series of representations to the principal series (known as the discrete series) which is more complicated to describe (roughly speaking, one has to embed {F_p} in a quadratic extension {F_{p^2}} and then use a rotated version of the above construction, to change a split torus into a non-split torus), but can generate irreducible representations of dimension {\frac{p-1}{2}}, showing that the bound in Lemma 2 is in fact exactly sharp. These constructions can be generalised to arbitrary finite groups of Lie type using Deligne-Lusztig theory, but this is beyond the scope of this course (and of my own knowledge in the subject).

Exercise 9 Let {p} be an odd prime. Show that for any {n \geq p+2}, the alternating group {A_n} is {p-1}-quasirandom. (Hint: show that all cycles of order {p} in {A_n} are conjugate to each other in {A_n} (and not just in {S_n}); in particular, a cycle is conjugate to its {j^{th}} power for all {j=1,\ldots,p-1}. Also, as {n \geq 5}, {A_n} is simple, and so the cycles of order {p} generate the entire group.)

Remark 3 By using more precise information on the representations of the alternating group (using the theory of Specht modules and Young tableaux), one can show the slightly sharper statement that {A_n} is {n-1}-quasirandom for {n \geq 6} (but is only {3}-quasirandom for {n=5} due to icosahedral symmetry, and {1}-quasirandom for {n \leq 4} due to lack of perfectness). Using Exercise 3 with the index {n} subgroup {A_{n-1}}, we see that the bound {n-1} cannot be improved. Thus, {A_n} (for large {n}) is not as quasirandom as the special linear groups {SL_d(F_p)} (for {p} large and {d} bounded), because in the latter case the quasirandomness is as strong as a power of the size of the group, whereas in the former case it is only logarithmic in size.

If one replaces the alternating group {A_n} with the slightly larger symmetric group {S_n}, then quasirandomness is destroyed (since {S_n}, having the abelian quotient {S_n/A_n}, is not perfect); indeed, {S_n} is {1}-quasirandom and no better.

Remark 4 Thanks to the monumental achievement of the classification of finite simple groups, we know that apart from a finite number (26, to be precise) of sporadic exceptions, all finite simple groups (up to isomorphism) are either a cyclic group {{\bf Z}/p{\bf Z}}, an alternating group {A_n}, or a finite simple group of Lie type such as {PSL_d(F_p)}. (We will define the concept of a finite simple group of Lie type more precisely in later notes, but suffice to say for now that such groups are constructed from reductive algebraic groups, for instance {PSL_d(F_p)} is constructed from {SL_d} in characteristic {p}.) In the case of finite simple groups {G} of Lie type with bounded rank {r=O(1)}, it is known from the work of Landazuri and Seitz that such groups are {\gg |G|^c}-quasirandom for some {c>0} depending only on the rank. On the other hand, by the previous remark, the large alternating groups do not have this property, and one can show that the finite simple groups of Lie type with large rank also do not have this property. Thus, we see using the classification that if a finite simple group {G} is {|G|^c}-quasirandom for some {c>0} and {|G|} is sufficiently large depending on {c}, then {G} is a finite simple group of Lie type with rank {O_c(1)}. It would be of interest to see if there was an alternate way to establish this fact that did not rely on the classification, as it may lead to an alternate approach to proving the classification (or perhaps a weakened version thereof).

A key reason why quasirandomness is desirable for the purposes of demonstrating expansion is that quasirandom groups happen to be rapidly mixing at large scales, as we shall see below the fold. As such, quasirandomness is an important tool for demonstrating expansion in Cayley graphs, though because expansion is a phenomenon that must hold at all scales, one needs to supplement quasirandomness with some additional input that creates mixing at small or medium scales also before one can deduce expansion. As an example of this technique of combining quasirandomness with mixing at small and medium scales, we present a proof (due to Sarnak-Xue, and simplified by Gamburd) of a weak version of the famous “3/16 theorem” of Selberg on the least non-trivial eigenvalue of the Laplacian on a modular curve, which among other things can be used to construct a family of expander Cayley graphs in {SL_2({\bf Z}/N{\bf Z})} (compare this with the property (T)-based methods in the previous notes, which could construct expander Cayley graphs in {SL_d({\bf Z}/N{\bf Z})} for any fixed {d \geq 3}).


The objective of this course is to present a number of recent constructions of expander graphs, which are a type of sparse but “pseudorandom” graph of importance in computer science, the theory of random walks, geometric group theory, and in number theory. The subject of expander graphs and their applications is an immense one, and we will not possibly be able to cover it in full in this course. In particular, we will say almost nothing about the important applications of expander graphs to computer science, for instance in constructing good pseudorandom number generators, derandomising a probabilistic algorithm, constructing error correcting codes, or in building probabilistically checkable proofs. For such topics, I recommend the survey of Hoory-Linial-Wigderson. We will also only pass very lightly over the other applications of expander graphs, though if time permits I may discuss at the end of the course the application of expander graphs in finite groups such as {SL_2(F_p)} to certain sieving problems in analytic number theory, following the work of Bourgain, Gamburd, and Sarnak.

Instead of focusing on applications, this course will concern itself much more with the task of constructing expander graphs. This is a surprisingly non-trivial problem. On one hand, an easy application of the probabilistic method shows that a randomly chosen (large, regular, bounded-degree) graph will be an expander graph with very high probability, so expander graphs are extremely abundant. On the other hand, in many applications, one wants an expander graph that is more deterministic in nature (requiring either no or very few random choices to build), and of a more specialised form. For the applications to number theory or geometric group theory, it is of particular interest to determine the expansion properties of a very symmetric type of graph, namely a Cayley graph; we will also occasionally work with the more general concept of a Schreier graph. It turns out that such questions are related to deep properties of various groups {G} of Lie type (such as {SL_2({\bf R})} or {SL_2({\bf Z})}), such as Kazhdan’s property (T), the first nontrivial eigenvalue of a Laplacian on a symmetric space {G/\Gamma} associated to {G}, the quasirandomness of {G} (as measured by the size of irreducible representations), and the product theory of subsets of {G}. These properties are of intrinsic interest to many other fields of mathematics (e.g. ergodic theory, operator algebras, additive combinatorics, representation theory, finite group theory, number theory, etc.), and it is quite remarkable that a single problem – namely the construction of expander graphs – is so deeply connected with such a rich and diverse array of mathematical topics. (Perhaps this is because so many of these fields are all grappling with aspects of a single general problem in mathematics, namely when to determine whether a given mathematical object or process of interest “behaves pseudorandomly” or not, and how this is connected with the symmetry group of that object or process.)

(There are also other important constructions of expander graphs that are not related to Cayley or Schreier graphs, such as those graphs constructed by the zigzag product construction, but we will not discuss those types of graphs in this course, again referring the reader to the survey of Hoory, Linial, and Wigderson.)


Emmanuel Breuillard, Ben Green, and I have just uploaded to the arXiv the paper “Suzuki groups as expanders“, to be submitted. The purpose of this paper is to finish off the last case of the following theorem:

Theorem 1 (All finite simple groups have expanders) For every finite simple non-abelian group {G}, there exists a set of generators {S} of cardinality bounded uniformly in {G}, such that the Cayley graph {\hbox{Cay}(G,S)} on {G} generated by {S} (i.e. the graph that connects {g} with {sg} for all {g \in G} and {s \in S}) has expansion constant bounded away from zero uniformly in {G}, or equivalently that {|A \cdot S| \geq (1+\epsilon) |A|} for all {A \subset G} with {|A| < |G|/2} and some {\epsilon>0} independent of {G}.

To put it in an essentially equivalent way, one can quickly generate a random element of a finite simple group with a near-uniform distribution by multiplying together a few ({O(\log |G|)}, to be more precise) randomly chosen elements of a fixed set {S}. (The most well-known instance of this phenomenon is the famous result of Bayer and Diaconis that one can shuffle a 52-card deck reasonably well after seven riffle shuffles, and almost perfectly after ten.) Note that the abelian simple groups {{\bf Z}/p{\bf Z}} do not support expanders due to the slow mixing time of random walks in the abelian setting.

The first step in proving this theorem is, naturally enough, the classification of finite simple groups. The sporadic groups have bounded cardinality and are a trivial case of this theorem, so one only has to deal with the seventeen infinite families of finite non-abelian simple groups. With one exception, the groups {G} in all of these families contain a copy of {SL_2({\bf F}_q)} for some {q} that goes to infinity as {|G| \rightarrow \infty}. Using this and several other non-trivial tools (such as Kazhdan’s property (T) and a deep model-theoretic result of Hrushovski and Pillay), the above theorem was proven for all groups outside of this exceptional family by a series of works culminating in the paper of Kassabov, Lubotzky, and Nikolov.

The exceptional family is the family of Suzuki groups {Sz(q)}, where {q = 2^{2n+1}} is an odd power of {2}. The Suzuki group {Sz(q)} can be viewed explicitly as a subgroup of the symplectic group {Sp_4(q)} and has cardinality {q^2 (q^2+1)(q-1) \approx q^5}. This cardinality is not divisible by {3}, whereas all groups of the form {SL_2(k)} have cardinality divisible by {3}; thus Suzuki groups do not contain copies of {SL_2} and the Kassabov-Lubotzky-Nikolov argument does not apply.

Our main result is that the Suzuki groups also support expanders, thus completing the last case of the above theorem. In fact we can pick just two random elements {a, b} of the Suzuki group, and with probability {1-o_{q \rightarrow \infty}(1)}, the Cayley graph generated by {S = \{a,b,a^{-1},b^{-1}\}} will be an expander uniformly in {q}. (As stated in the paper of Kassabov-Lubotzky-Nikolov, the methods in that paper should give an upper bound on {|S|} which they conservatively estimate to be {1000}.)

Our methods are different, instead following closely the arguments of Bourgain and Gamburd, which established the analogue of our result (i.e. that two random elements generate an expander graph) for the family of groups {SL_2({\bf F}_p)} ({p} a large prime); the arguments there have since been generalised to several other major families of groups, and our result here can thus be viewed as one further such generalisation. Roughly speaking, the strategy is as follows. Let {\mu} be the uniform probability measure on the randomly chosen set of generators {S}, and let {\mu^{(n)}} be the {n}-fold convolution. We need {\mu^{(n)}} to converge rapidly to the uniform measure on {G} (with a mixing time of {O(\log |G|)}). There are three steps to obtain this mixing:

  • (Early period) When {n \sim c \log |G|} for some small {c > 0}, one wants {\mu^{(n)}} to spread out a little bit in the sense that no individual element of {G} is assigned a mass of any more than {|G|^{-\epsilon}} for some fixed {\epsilon > 0}. More generally, no proper subgroup {H} of {G} should be assigned a mass of more than {|G|^{-\epsilon}}.
  • (Middle period) Once {\mu^{(n)}} is somewhat spread out, one should be able to convolve this measure with itself a bounded number of times and conclude that the measure {\mu^{(Cn)}} for some suitable {C} is reasonably spread out in the sense that its {L^2} norm is comparable (up to powers of {|G|^{\epsilon}} for some small {\epsilon > 0}) to the {L^2} norm of the uniform distribution.
  • (Late period) Once {\mu^{(n)}} is reasonably spread out, a few more convolutions should make it extremely close to uniform (e.g. within {|G|^{-10}} in the {L^\infty} norm).

The late period claim is easy to establish from Gowers’ theory of quasirandom groups, the key point being that (like all other finite simple nonabelian groups) the Suzuki groups do not admit any non-trivial low-dimensional irreducible representations (we can for instance use a precise lower bound of {\gg q^{3/2}}, due to Landazuri and Seitz). (One can also proceed here using a trace formula argument of Sarnak and Xue; the two approaches are basically equivalent.) The middle period reduces, by a variant of the Balog-Szemerédi-Gowers lemma, to a product estimate in {Sz(q)} which was recently established by Pyber-Szabó and can also be obtained by the methods of proof of the results announced by ourselves. (These arguments are in turn based on an earlier result of Helfgott establishing the analogous claim for {SL_2({\bf F}_p)}.) This requires checking that {Sz(q)} is a “sufficiently Zariski dense” subgroup of the finite Lie group {Sp_4(q)}, but this can be done using an explicit description of the Suzuki group and the Schwartz-Zippel lemma.

The main difficulty is then to deal with the early period, obtaining some initial non-concentration in the random walk associated to {S} away from subgroups {H} of {Sz(q)}. These subgroups have been classified for some time (see e.g. the book of Wilson); they split into two families, the algebraic subgroups, which in the Suzuki case turn out to be solvable of derived length at most three, and the arithmetic subgroups, which are conjugate to {Sz(q_0)}, where {{\bf F}_{q_0}} is a subfield of {{\bf F}_q}.

In the algebraic case, one can prevent concentration using a lower bound on the girth of random Cayley graphs due to Gamburd, Hoory, Shahshahani, Shalev, and Virág (and we also provide an independent proof of this fact for completeness, which fortunately is able to avoid any really deep technology, such as Lang-Weil estimates); this follows an analogous argument of Bourgain-Gamburd in the {SL_2} case fairly closely, and is ultimately based on the fact that all the algebraic subgroups obey a fixed law (in this case, the law arises from the solvability). In the arithmetic case, the main task is to show that the coefficients of the characteristic polynomial of a typical word in {S} do not fall into a proper subfield of {{\bf F}_q}, but this can be accomplished by a variant of the Schwartz-Zippel lemma.
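
As a toy illustration of the general strategy of watching {\mu^{(n)}} flatten out (and emphatically not of the Suzuki groups themselves), here is a short Python simulation of the random walk generated by two elements of the small stand-in group {SL_2(F_5)}; for definiteness the two standard unipotent generators are used rather than random ones. One can watch the sup norm and the {L^2} distance to the uniform distribution decay as {n} grows.

```python
from itertools import product

p = 5
group = [g for g in product(range(p), repeat=4) if (g[0] * g[3] - g[1] * g[2]) % p == 1]
index = {g: i for i, g in enumerate(group)}          # |SL_2(F_5)| = 120

def mul(x, y):
    a, b, c, d = x
    e, f, g, h = y
    return ((a * e + b * g) % p, (a * f + b * h) % p, (c * e + d * g) % p, (c * f + d * h) % p)

def inv(x):
    a, b, c, d = x
    return (d % p, -b % p, -c % p, a % p)            # adjugate = inverse, since det = 1

x, y = (1, 1, 0, 1), (1, 0, 1, 1)                    # the standard unipotent generators
S = [x, inv(x), y, inv(y)]

N = len(group)
dist = [0.0] * N
dist[index[(1, 0, 0, 1)]] = 1.0                      # mu^{(0)} = delta at the identity
for n in range(1, 21):
    new = [0.0] * N
    for i, g in enumerate(group):
        if dist[i]:
            for s in S:
                new[index[mul(g, s)]] += dist[i] / 4
    dist = new
    sup = max(dist)
    l2 = sum((q - 1 / N) ** 2 for q in dist) ** 0.5
    print(f"n = {n:2d}: sup mu^(n) = {sup:.4f}, L2 distance to uniform = {l2:.4f}")
```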

Van Vu and I have just uploaded to the arXiv our preprint “A sharp inverse Littlewood-Offord theorem“, which we have submitted to Random Structures and Algorithms.  This paper gives a solution to the (inverse) Littlewood-Offord problem of understanding when random walks are concentrated in the case when the concentration is of polynomial size in the length n of the walk; our description is sharp up to epsilon powers of n.  The theory of inverse Littlewood-Offord problems and related topics has been of importance in recent developments in the spectral theory of discrete random matrices (e.g. a “robust” variant of these theorems was crucial in our work on the circular law).

For simplicity I will restrict attention to the Bernoulli random walk.  Given n real numbers v_1,\ldots,v_n, one can form the random variable

S := \epsilon_1 v_1 + \ldots + \epsilon_n v_n

where \epsilon_1,\ldots,\epsilon_n \in \{-1,+1\} are iid random signs (with either sign +1, -1 chosen with probability 1/2).  This is a discrete random variable which typically takes 2^n values.  However, if there are various arithmetic relations between the step sizes v_1,\ldots,v_n, then many of the 2^n possible sums collide, and certain values may then arise with much higher probability.  To measure this, define the concentration probability p(v_1,\ldots,v_n) by the formula

p(v_1,\ldots,v_n) = \sup_x {\Bbb P}(S=x).

Intuitively, this probability measures the amount of additive structure present between the v_1,\ldots,v_n.  There are two (opposing) problems in the subject:

  • (Forward Littlewood-Offord problem) Given some structural assumptions on v_1,\ldots,v_n, what bounds can one place on p(v_1,\ldots,v_n)?
  • (Inverse Littlewood-Offord problem) Given some bounds on p(v_1,\ldots,v_n), what structural assumptions can one then conclude about v_1,\ldots,v_n?

Ideally one would like answers to both of these problems which come close to inverting each other, and this is the guiding motivation for our paper.
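
To get a quantitative feel for the forward problem, here is a brute-force Python computation of p(v_1,\ldots,v_n) for a few illustrative choices of step sizes; the comments record the classical forward Littlewood-Offord behaviour (roughly n^{-1/2} for equal steps, roughly n^{-3/2} for distinct steps in an arithmetic progression, and the minimal value 2^{-n} for lacunary steps).

```python
from itertools import product
from collections import Counter

def concentration(v):
    """p(v_1,...,v_n) = sup_x P(eps_1 v_1 + ... + eps_n v_n = x) over Bernoulli signs."""
    n = len(v)
    counts = Counter(sum(e * vi for e, vi in zip(eps, v)) for eps in product((-1, 1), repeat=n))
    return max(counts.values()) / 2 ** n

n = 14
examples = {
    "all steps equal (v_i = 1)":        [1] * n,
    "arithmetic progression (v_i = i)": list(range(1, n + 1)),
    "lacunary steps (v_i = 2^i)":       [2 ** i for i in range(n)],
}
for name, v in examples.items():
    print(f"{name:35s}  p(v) = {concentration(v):.5f}")
# Typical output: roughly n^{-1/2} for equal steps, roughly n^{-3/2} for an arithmetic
# progression, and exactly 2^{-n} for lacunary steps (no collisions between signed sums).
```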

