The Poincaré upper half-plane {{\mathbf H} := \{ z: \hbox{Im}(z) > 0 \}} (with a boundary consisting of the real line {{\bf R}} together with the point at infinity {\infty}) carries an action of the projective special linear group

\displaystyle  \hbox{PSL}_2({\bf R}) := \{ \begin{pmatrix} a & b \\ c & d \end{pmatrix}: a,b,c,d \in {\bf R}: ad-bc = 1 \} / \{\pm 1\}

via fractional linear transformations:

\displaystyle  \begin{pmatrix} a & b \\ c & d \end{pmatrix} z := \frac{az+b}{cz+d}. \ \ \ \ \ (1)

Here and in the rest of the post we will abuse notation by identifying elements {\begin{pmatrix} a & b \\ c & d \end{pmatrix}} of the special linear group {\hbox{SL}_2({\bf R})} with their equivalence class {\{ \pm \begin{pmatrix} a & b \\ c & d \end{pmatrix} \}} in {\hbox{PSL}_2({\bf R})}; this will occasionally create or remove a factor of two in our formulae, but otherwise has very little effect, though one has to check that various definitions and expressions (such as (1)) are unaffected if one replaces a matrix {\begin{pmatrix} a & b \\ c & d \end{pmatrix}} by its negation {\begin{pmatrix} -a & -b \\ -c & -d \end{pmatrix}}. In particular, we recommend that the reader ignore the signs {\pm} that appear from time to time in the discussion below.

As the action of {\hbox{PSL}_2({\bf R})} on {{\mathbf H}} is transitive, and any given point in {{\mathbf H}} (e.g. {i}) has a stabiliser isomorphic to the projective rotation group {\hbox{PSO}_2({\bf R})}, we can view the Poincaré upper half-plane {{\mathbf H}} as a homogeneous space for {\hbox{PSL}_2({\bf R})}, and more specifically the quotient space of {\hbox{PSL}_2({\bf R})} by a maximal compact subgroup {\hbox{PSO}_2({\bf R})}. In fact, we can make the half-plane a symmetric space for {\hbox{PSL}_2({\bf R})}, by endowing {{\mathbf H}} with the Riemannian metric

\displaystyle  dg^2 := \frac{dx^2 + dy^2}{y^2}

(using Cartesian coordinates {z=x+iy}), which is invariant with respect to the {\hbox{PSL}_2({\bf R})} action. Like any other Riemannian metric, the metric on {{\mathbf H}} generates a number of other important geometric objects on {{\mathbf H}}, such as the distance function {d(z,w)} which can be computed to be given by the formula

\displaystyle  2(\cosh(d(z_1,z_2))-1) = \frac{|z_1-z_2|^2}{\hbox{Im}(z_1) \hbox{Im}(z_2)}, \ \ \ \ \ (2)

the volume measure {\mu = \mu_{\mathbf H}}, which can be computed to be

\displaystyle  d\mu = \frac{dx dy}{y^2},

and the Laplace-Beltrami operator, which can be computed to be {\Delta = y^2 (\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2})} (here we use the negative definite sign convention for {\Delta}). As the metric {dg} was {\hbox{PSL}_2({\bf R})}-invariant, all of these quantities arising from the metric are similarly {\hbox{PSL}_2({\bf R})}-invariant in the appropriate sense.
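As a quick sanity check, the invariance of the distance formula (2) under the action (1) can be verified numerically; the following minimal Python sketch (my own illustration, with ad hoc helper names, not part of the original discussion) tests it on randomly generated elements of {\hbox{SL}_2({\bf R})}:

```python
import math
import random

def mobius(a, b, c, d, z):
    # the fractional linear action (1)
    return (a * z + b) / (c * z + d)

def hyp_dist(z, w):
    # the distance formula (2): 2(cosh d - 1) = |z - w|^2 / (Im z Im w)
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

random.seed(0)
for _ in range(100):
    # random element of SL_2(R): pick a, b, c and solve for d
    a = random.uniform(0.5, 2.0)
    b, c = random.uniform(-2, 2), random.uniform(-2, 2)
    d = (1 + b * c) / a  # enforces ad - bc = 1
    z = complex(random.uniform(-2, 2), random.uniform(0.2, 3))
    w = complex(random.uniform(-2, 2), random.uniform(0.2, 3))
    gz, gw = mobius(a, b, c, d, z), mobius(a, b, c, d, w)
    assert abs(hyp_dist(z, w) - hyp_dist(gz, gw)) < 1e-9
```

For instance, {d(i, 2i)} comes out to {\log 2}, consistent with the explicit geodesics along the imaginary axis.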

The Gauss curvature of the Poincaré half-plane can be computed to be the constant {-1}, thus {{\mathbf H}} is a model for two-dimensional hyperbolic geometry, in much the same way that the unit sphere {S^2} in {{\bf R}^3} is a model for two-dimensional spherical geometry (or {{\bf R}^2} is a model for two-dimensional Euclidean geometry). (Indeed, {{\mathbf H}} is isomorphic (via projection to a null hyperplane) to the upper unit hyperboloid {\{ (x,t) \in {\bf R}^{2+1}: t = \sqrt{1+|x|^2}\}} in the Minkowski spacetime {{\bf R}^{2+1}}, which is the direct analogue of the unit sphere in Euclidean spacetime {{\bf R}^3} or the plane {{\bf R}^2} in Galilean spacetime {{\bf R}^2 \times {\bf R}}.)

One can inject arithmetic into this geometric structure by passing from the Lie group {\hbox{PSL}_2({\bf R})} to the full modular group

\displaystyle  \hbox{PSL}_2({\bf Z}) := \{ \begin{pmatrix} a & b \\ c & d \end{pmatrix}: a,b,c,d \in {\bf Z}: ad-bc = 1 \} / \{\pm 1\}

or congruence subgroups such as

\displaystyle  \Gamma_0(q) := \{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \hbox{PSL}_2({\bf Z}): c = 0\ (q) \} \ \ \ \ \ (3)

for a natural number {q}, or to the discrete stabiliser {\Gamma_\infty} of the point at infinity:

\displaystyle  \Gamma_\infty := \{ \pm \begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix}: b \in {\bf Z} \} / \{\pm 1\}. \ \ \ \ \ (4)

These are discrete subgroups of {\hbox{PSL}_2({\bf R})}, nested by the subgroup inclusions

\displaystyle  \Gamma_\infty \leq \Gamma_0(q) \leq \Gamma_0(1)=\hbox{PSL}_2({\bf Z}) \leq \hbox{PSL}_2({\bf R}).

There are many further discrete subgroups of {\hbox{PSL}_2({\bf R})} (known collectively as Fuchsian groups) that one could consider, but we will focus attention on these three groups in this post.

Any discrete subgroup {\Gamma} of {\hbox{PSL}_2({\bf R})} generates a quotient space {\Gamma \backslash {\mathbf H}}, which in general will be a non-compact two-dimensional orbifold. One can understand such a quotient space by working with a fundamental domain {\hbox{Fund}( \Gamma \backslash {\mathbf H})} – a set consisting of a single representative of each of the orbits {\Gamma z} of {\Gamma} in {{\mathbf H}}. This fundamental domain is by no means uniquely defined, but if the fundamental domain is chosen with some reasonable amount of regularity, one can view {\Gamma \backslash {\mathbf H}} as the fundamental domain with the boundaries glued together in an appropriate sense. Among other things, fundamental domains can be used to induce a volume measure {\mu = \mu_{\Gamma \backslash {\mathbf H}}} on {\Gamma \backslash {\mathbf H}} from the volume measure {\mu = \mu_{\mathbf H}} on {{\mathbf H}} (restricted to a fundamental domain). By abuse of notation we will refer to both measures simply as {\mu} when there is no chance of confusion.
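For the modular group, the passage from a point of {{\mathbf H}} to its representative in a fundamental domain can be made completely algorithmic via the classical reduction procedure: translate the real part into {[-\frac{1}{2},\frac{1}{2}]} and apply the inversion {z \mapsto -1/z} while {|z| < 1}. Here is a small sketch of mine (not from the original discussion):

```python
def reduce_to_fundamental_domain(z):
    # Move z in the upper half-plane to the standard fundamental domain
    # for PSL_2(Z): each inversion strictly increases Im(z) while |z| < 1,
    # so the loop terminates.
    assert z.imag > 0
    while True:
        z = complex(z.real - round(z.real), z.imag)  # now |Re z| <= 1/2
        if abs(z) >= 1:
            return z
        z = -1 / z  # inversion, an element of PSL_2(Z)

w = reduce_to_fundamental_domain(0.7 + 0.05j)
assert abs(w.real) <= 0.5 + 1e-12 and abs(w) >= 1 - 1e-12
```

Since translations preserve the imaginary part and inversions increase it, the final representative has the largest imaginary part in its orbit.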

For instance, a fundamental domain for {\Gamma_\infty \backslash {\mathbf H}} is given (up to null sets) by the strip {\{ z \in {\mathbf H}: |\hbox{Re}(z)| < \frac{1}{2} \}}, with {\Gamma_\infty \backslash {\mathbf H}} identifiable with the cylinder formed by gluing together the two sides of the strip. A fundamental domain for {\hbox{PSL}_2({\bf Z}) \backslash {\mathbf H}} is famously given (again up to null sets) by an upper portion {\{ z \in {\mathbf H}: |\hbox{Re}(z)| < \frac{1}{2}; |z| > 1 \}}, with the left and right sides again glued to each other, and the left and right halves of the circular boundary glued to itself. A fundamental domain for {\Gamma_0(q) \backslash {\mathbf H}} can be formed by gluing together

\displaystyle  [\hbox{PSL}_2({\bf Z}) : \Gamma_0(q)] = q \prod_{p|q} (1 + \frac{1}{p}) = q^{1+o(1)}

copies of a fundamental domain for {\hbox{PSL}_2({\bf Z}) \backslash {\mathbf H}} in a rather complicated but interesting fashion.
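The index here equals the number of points of the projective line {P^1({\bf Z}/q{\bf Z})}, which makes the formula easy to verify numerically. The following check (my own illustration, not from the original discussion) counts pairs {(c,d)} mod {q} with {\hbox{gcd}(c,d,q)=1} and divides by the number {\phi(q)} of units:

```python
from math import gcd

def index_formula(q):
    # q * prod_{p | q} (1 + 1/p), computed exactly in integers
    n, num, den = q, 1, 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            num *= p + 1
            den *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        num *= n + 1
        den *= n
    return q * num // den

def index_by_counting(q):
    # |P^1(Z/qZ)|: pairs (c,d) mod q with gcd(c,d,q)=1, modulo units
    pairs = sum(1 for c in range(q) for d in range(q) if gcd(gcd(c, d), q) == 1)
    phi = sum(1 for u in range(q) if gcd(u, q) == 1)
    return pairs // phi

for q in range(1, 60):
    assert index_formula(q) == index_by_counting(q)
```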

While fundamental domains can be a convenient choice of coordinates to work with for some computations (as well as for drawing appropriate pictures), it is geometrically more natural to avoid working explicitly on such domains, and instead work directly on the quotient spaces {\Gamma \backslash {\mathbf H}}. In order to analyse functions {f: \Gamma \backslash {\mathbf H} \rightarrow {\bf C}} on such orbifolds, it is convenient to lift such functions back up to {{\mathbf H}} and identify them with functions {f: {\mathbf H} \rightarrow {\bf C}} which are {\Gamma}-automorphic in the sense that {f( \gamma z ) = f(z)} for all {z \in {\mathbf H}} and {\gamma \in \Gamma}. Such functions will be referred to as {\Gamma}-automorphic forms, or automorphic forms for short (we always implicitly assume all such functions to be measurable). (Strictly speaking, these are the automorphic forms with trivial factor of automorphy; one can certainly consider other factors of automorphy, particularly when working with holomorphic modular forms, which corresponds to sections of a more non-trivial line bundle over {\Gamma \backslash {\mathbf H}} than the trivial bundle {(\Gamma \backslash {\mathbf H}) \times {\bf C}} that is implicitly present when analysing scalar functions {f: {\mathbf H} \rightarrow {\bf C}}. However, we will not discuss this (important) more general situation here.)

An important way to create a {\Gamma}-automorphic form is to start with a non-automorphic function {f: {\mathbf H} \rightarrow {\bf C}} obeying suitable decay conditions (e.g. bounded with compact support will suffice) and form the Poincaré series {P_\Gamma[f]: {\mathbf H} \rightarrow {\bf C}} defined by

\displaystyle  P_{\Gamma}[f](z) = \sum_{\gamma \in \Gamma} f(\gamma z),

which is clearly {\Gamma}-automorphic. (One could equivalently write {f(\gamma^{-1} z)} in place of {f(\gamma z)} here; there are good arguments for both conventions, but I have ultimately decided to use the {f(\gamma z)} convention, which makes explicit computations a little neater at the cost of making the group actions work in the opposite order.) Thus we naturally see sums over {\Gamma} associated with {\Gamma}-automorphic forms. A little more generally, given a subgroup {\Gamma_\infty} of {\Gamma} and a {\Gamma_\infty}-automorphic function {f: {\mathbf H} \rightarrow {\bf C}} of suitable decay, we can form a relative Poincaré series {P_{\Gamma_\infty \backslash \Gamma}[f]: {\mathbf H} \rightarrow {\bf C}} by

\displaystyle  P_{\Gamma_\infty \backslash \Gamma}[f](z) = \sum_{\gamma \in \hbox{Fund}(\Gamma_\infty \backslash \Gamma)} f(\gamma z)

where {\hbox{Fund}(\Gamma_\infty \backslash \Gamma)} is any fundamental domain for {\Gamma_\infty \backslash \Gamma}, that is to say a subset of {\Gamma} consisting of exactly one representative for each right coset of {\Gamma_\infty}. As {f} is {\Gamma_\infty}-automorphic, we see (if {f} has suitable decay) that {P_{\Gamma_\infty \backslash \Gamma}[f]} does not depend on the precise choice of fundamental domain, and is {\Gamma}-automorphic. These operations are all compatible with each other, for instance {P_\Gamma = P_{\Gamma_\infty \backslash \Gamma} \circ P_{\Gamma_\infty}}. Key examples of Poincaré series are the Eisenstein series, although there are of course many other Poincaré series one can consider by varying the test function {f}.
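As a toy illustration (my own, not from the original discussion), take {\Gamma = \Gamma_\infty}, whose elements act by integer translations {z \mapsto z + b}; for {f} bounded with compact support the Poincaré series is then a finite sum, and {\Gamma_\infty}-automorphy is simply periodicity in the horizontal variable:

```python
import math

def f(z):
    # a bump supported in the disk of radius 1/2 around i
    # (bounded with compact support, as in the text)
    r2 = z.real ** 2 + (z.imag - 1.0) ** 2
    return math.exp(-1.0 / (0.25 - r2)) if r2 < 0.25 else 0.0

def poincare(z, B=50):
    # P_{Gamma_infty}[f](z) = sum over b of f(z + b); the truncation to
    # |b| <= B is exact here since f has compact support
    return sum(f(z + b) for b in range(-B, B + 1))

z = 0.3 + 1.1j
assert abs(poincare(z) - poincare(z + 1)) < 1e-15  # Gamma_infty-automorphic
```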

For future reference we record the basic but fundamental unfolding identities

\displaystyle  \int_{\Gamma \backslash {\mathbf H}} P_\Gamma[f] g\ d\mu_{\Gamma \backslash {\mathbf H}} = \int_{\mathbf H} f g\ d\mu_{\mathbf H} \ \ \ \ \ (5)

for any function {f: {\mathbf H} \rightarrow {\bf C}} with sufficient decay, and any {\Gamma}-automorphic function {g} of reasonable growth (e.g. {f} bounded with compact support, and {g} bounded, will suffice). Note that {g} is viewed as a function on {\Gamma \backslash {\mathbf H}} on the left-hand side, and as a {\Gamma}-automorphic function on {{\mathbf H}} on the right-hand side. More generally, one has

\displaystyle  \int_{\Gamma \backslash {\mathbf H}} P_{\Gamma_\infty \backslash \Gamma}[f] g\ d\mu_{\Gamma \backslash {\mathbf H}} = \int_{\Gamma_\infty \backslash {\mathbf H}} f g\ d\mu_{\Gamma_\infty \backslash {\mathbf H}} \ \ \ \ \ (6)

whenever {\Gamma_\infty \leq \Gamma} are discrete subgroups of {\hbox{PSL}_2({\bf R})}, {f} is a {\Gamma_\infty}-automorphic function with sufficient decay on {\Gamma_\infty \backslash {\mathbf H}}, and {g} is a {\Gamma}-automorphic (and thus also {\Gamma_\infty}-automorphic) function of reasonable growth. These identities will allow us to move fairly freely between the three domains {{\mathbf H}}, {\Gamma_\infty \backslash {\mathbf H}}, and {\Gamma \backslash {\mathbf H}} in our analysis.

When computing various statistics of a Poincaré series {P_\Gamma[f]}, such as its values {P_\Gamma[f](z)} at special points {z}, or the {L^2} quantity {\int_{\Gamma \backslash {\mathbf H}} |P_\Gamma[f]|^2\ d\mu}, expressions of interest to analytic number theory naturally emerge. We list three basic examples of this below, discussed somewhat informally in order to highlight the main ideas rather than the technical details.

The first example we will give concerns the problem of estimating the sum

\displaystyle  \sum_{n \leq x} \tau(n) \tau(n+1), \ \ \ \ \ (7)

where {\tau(n) := \sum_{d|n} 1} is the divisor function. This can be rewritten (by factoring {n=bc} and {n+1=ad}) as

\displaystyle  \sum_{ a,b,c,d \in {\bf N}: ad-bc = 1} 1_{bc \leq x} \ \ \ \ \ (8)

which is basically a sum over the full modular group {\hbox{PSL}_2({\bf Z})}. At this point we will “cheat” a little by moving to the related, but different, sum

\displaystyle  \sum_{a,b,c,d \in {\bf Z}: ad-bc = 1} 1_{a^2+b^2+c^2+d^2 \leq x}. \ \ \ \ \ (9)

This sum is not exactly the same as (8), but will be a little easier to handle, and it is plausible that the methods used to handle this sum can be modified to handle (8). Observe from (2) and some calculation that the distance between {i} and {\begin{pmatrix} a & b \\ c & d \end{pmatrix} i = \frac{ai+b}{ci+d}} is given by the formula

\displaystyle  2(\cosh(d(i,\begin{pmatrix} a & b \\ c & d \end{pmatrix} i))-1) = a^2+b^2+c^2+d^2 - 2

and so one can express the above sum as

\displaystyle  2 \sum_{\gamma \in \hbox{PSL}_2({\bf Z})} 1_{d(i,\gamma i) \leq \hbox{cosh}^{-1}(x/2)}

(the factor of {2} coming from the quotient by {\{\pm 1\}} in the projective special linear group); one can express this as {P_\Gamma[f](i)}, where {\Gamma = \hbox{PSL}_2({\bf Z})} and {f} is the indicator function of the ball {B(i, \hbox{cosh}^{-1}(x/2))}. Thus we see that expressions such as (7) are related to evaluations of Poincaré series. (In practice, it is much better to use smoothed out versions of indicator functions in order to obtain good control on sums such as (7) or (9), but we gloss over this technical detail here.)
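The rewriting of (7) as the lattice count (8) can be confirmed by brute force for small {x}; here is a quick check of mine (not from the original discussion):

```python
def tau(n):
    # the divisor function
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def divisor_correlation(x):
    # the sum (7)
    return sum(tau(n) * tau(n + 1) for n in range(1, x + 1))

def lattice_count(x):
    # the sum (8): positive integer quadruples with ad - bc = 1, bc <= x
    # (note ad = bc + 1 <= x + 1 bounds the search)
    return sum(
        1
        for a in range(1, x + 2)
        for b in range(1, x + 1)
        for c in range(1, x + 1)
        for d in range(1, x + 2)
        if a * d - b * c == 1 and b * c <= x
    )

for x in (1, 5, 10, 20):
    assert divisor_correlation(x) == lattice_count(x)
```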

The second example concerns the relative

\displaystyle  \sum_{n \leq x} \tau(n^2+1) \ \ \ \ \ (10)

of the sum (7). Note from multiplicativity that (7) can be written as {\sum_{n \leq x} \tau(n^2+n)}, which is superficially very similar to (10), but with the key difference that the polynomial {n^2+1} is irreducible over the integers.
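The multiplicativity remark is also easy to check directly (a small sketch of mine): since {n} and {n+1} are coprime, {\tau} is multiplicative across the factorisation {n^2+n = n(n+1)}.

```python
def tau(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# gcd(n, n+1) = 1, so tau(n^2 + n) = tau(n) tau(n+1)
assert all(tau(n) * tau(n + 1) == tau(n * n + n) for n in range(1, 200))
```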

As with (7), we may expand (10) as

\displaystyle  \sum_{A,B,C \in {\bf N}: B^2 - AC = -1} 1_{B \leq x}.

At first glance this does not look like a sum over a modular group, but one can manipulate this expression into such a form in one of two (closely related) ways. First, observe that any factorisation {B + i = (a-bi) (c+di)} of {B+i} into Gaussian integers {a-bi, c+di} gives rise (upon taking norms) to an identity of the form {B^2 - AC = -1}, where {A = a^2+b^2} and {C = c^2+d^2}. Conversely, by using the unique factorisation of the Gaussian integers, every identity of the form {B^2-AC=-1} gives rise to a factorisation of the form {B+i = (a-bi) (c+di)}, essentially uniquely up to units. Now note that {(a-bi)(c+di)} is of the form {B+i} if and only if {ad-bc=1}, in which case {B = ac+bd}. Thus we can essentially write the above sum as something like

\displaystyle  \sum_{a,b,c,d: ad-bc = 1} 1_{|ac+bd| \leq x} \ \ \ \ \ (11)

and now the modular group {\hbox{PSL}_2({\bf Z})} is manifest. An equivalent way to see these manipulations is as follows. A triple {A,B,C} of natural numbers with {B^2-AC=-1} gives rise to a positive definite quadratic form {Ax^2+2Bxy+Cy^2} of normalised discriminant {B^2-AC} equal to {-1} with integer coefficients (it is natural here to allow {B} to take integer values rather than just natural number values by essentially doubling the sum). The group {\hbox{PSL}_2({\bf Z})} acts on the space of such quadratic forms in a natural fashion (by composing the quadratic form with the inverse {\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}} of an element {\begin{pmatrix} a & b \\ c & d \end{pmatrix}} of {\hbox{SL}_2({\bf Z})}). Because the discriminant {-1} has class number one (this fact is equivalent to the unique factorisation of the Gaussian integers, as discussed in this previous post), every form {Ax^2 + 2Bxy + Cy^2} in this space is equivalent (under the action of some element of {\hbox{PSL}_2({\bf Z})}) to the standard quadratic form {x^2+y^2}. In other words, one has

\displaystyle  Ax^2 + 2Bxy + Cy^2 = (dx-by)^2 + (-cx+ay)^2

which (up to a harmless sign, and with the roles of {A} and {C} interchanged relative to the earlier labelling) is exactly the representation {B = ac+bd}, {A = c^2+d^2}, {C = a^2+b^2} introduced earlier, and leads to the same reformulation of the sum (10) in terms of expressions like (11). Similar considerations also apply if the quadratic polynomial {n^2+1} is replaced by another quadratic, although one has to account for the fact that the class number may now exceed one (so that unique factorisation in the associated quadratic ring of integers breaks down), and in the positive discriminant case the fact that the group of units might be infinite presents another significant technical problem.
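The factorisation correspondence can also be tested numerically. The sketch below (mine, not from the original discussion) checks that the number of integer quadruples with {ad-bc=1} and {ac+bd=B} is exactly {4\tau(B^2+1)}, the factor {4} accounting for the units {\{\pm 1, \pm i\}} of the Gaussian integers:

```python
import math

def tau(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def quadruple_count(B):
    # Count (a,b,c,d) in Z^4 with ad - bc = 1 and ac + bd = B.  Expanding
    # (a - bi)(c + di) = (ac + bd) + (ad - bc) i shows these are exactly
    # the Gaussian factorisations of B + i; the norm identity
    # (ac+bd)^2 + (ad-bc)^2 = (a^2+b^2)(c^2+d^2) = B^2 + 1 bounds the search.
    R = math.isqrt(B * B + 1)
    return sum(
        1
        for a in range(-R, R + 1)
        for b in range(-R, R + 1)
        for c in range(-R, R + 1)
        for d in range(-R, R + 1)
        if a * d - b * c == 1 and a * c + b * d == B
    )

for B in range(1, 7):
    assert quadruple_count(B) == 4 * tau(B * B + 1)
```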

Note that {\begin{pmatrix} a & b \\ c & d \end{pmatrix} i = \frac{ai+b}{ci+d}} has real part {\frac{ac+bd}{c^2+d^2}} and imaginary part {\frac{1}{c^2+d^2}}. Thus (11) is (up to a factor of two) the Poincaré series {P_\Gamma[f](i)} as in the preceding example, except that {f} is now the indicator of the sector {\{ z: |\hbox{Re} z| \leq x |\hbox{Im} z| \}}.

Sums involving subgroups of the full modular group, such as {\Gamma_0(q)}, often arise when imposing congruence conditions on sums such as (10), for instance when trying to estimate the expression {\sum_{n \leq x: q|n} \tau(n^2+1)} when {q} and {x} are large. As before, one then soon arrives at the problem of evaluating a Poincaré series at one or more special points, where the series is now over {\Gamma_0(q)} rather than {\hbox{PSL}_2({\bf Z})}.

The third and final example concerns averages of Kloosterman sums

\displaystyle  S(m,n;c) := \sum_{x \in ({\bf Z}/c{\bf Z})^\times} e( \frac{mx + n\overline{x}}{c} ) \ \ \ \ \ (12)

where {e(\theta) := e^{2\pi i \theta}} and {\overline{x}} is the inverse of {x} in the multiplicative group {({\bf Z}/c{\bf Z})^\times}. It turns out that the {L^2} norms of Poincaré series {P_\Gamma[f]} or {P_{\Gamma_\infty \backslash \Gamma}[f]} are closely tied to such averages. Consider for instance the quantity

\displaystyle  \int_{\Gamma_0(q) \backslash {\mathbf H}} |P_{\Gamma_\infty \backslash \Gamma_0(q)}[f]|^2\ d\mu_{\Gamma_0(q) \backslash {\mathbf H}} \ \ \ \ \ (13)

where {q} is a natural number and {f} is a {\Gamma_\infty}-automorphic form that is of the form

\displaystyle  f(x+iy) = F(my) e(m x)

for some non-zero integer {m} and some test function {F: (0,+\infty) \rightarrow {\bf C}}, which for sake of discussion we will take to be smooth and compactly supported. Using the unfolding formula (6), we may rewrite (13) as

\displaystyle  \int_{\Gamma_\infty \backslash {\mathbf H}} \overline{f} P_{\Gamma_\infty \backslash \Gamma_0(q)}[f]\ d\mu_{\Gamma_\infty \backslash {\mathbf H}}.

To compute this, we use the double coset decomposition

\displaystyle  \Gamma_0(q) = \Gamma_\infty \cup \bigcup_{c \in {\mathbf N}: q|c} \bigcup_{1 \leq d \leq c: (d,c)=1} \Gamma_\infty \begin{pmatrix} a & b \\ c & d \end{pmatrix} \Gamma_\infty,

where for each {c,d}, {a,b} are arbitrarily chosen integers such that {ad-bc=1}. To see this decomposition, observe that every element {\begin{pmatrix} a & b \\ c & d \end{pmatrix}} in {\Gamma_0(q)} outside of {\Gamma_\infty} can be assumed to have {c>0} by applying a sign {\pm}, and then using the row and column operations coming from left and right multiplication by {\Gamma_\infty} (that is, shifting the top row by an integer multiple of the bottom row, and shifting the right column by an integer multiple of the left column) one can place {d} in the interval {[1,c]} and {(a,b)} to be any specified integer pair with {ad-bc=1}. From this we see that

\displaystyle  P_{\Gamma_\infty \backslash \Gamma_0(q)}[f] = f + \sum_{c \in {\mathbf N}: q|c} \sum_{1 \leq d \leq c: (d,c)=1} P_{\Gamma_\infty}[ f( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot ) ]

and so from further use of the unfolding formula (5) we may expand (13) as

\displaystyle  \int_{\Gamma_\infty \backslash {\mathbf H}} |f|^2\ d\mu_{\Gamma_\infty \backslash {\mathbf H}}

\displaystyle  + \sum_{c \in {\mathbf N}: q|c} \sum_{1 \leq d \leq c: (d,c)=1} \int_{\mathbf H} \overline{f}(z) f( \begin{pmatrix} a & b \\ c & d \end{pmatrix} z)\ d\mu_{\mathbf H}.

The first integral is just {m \int_0^\infty |F(y)|^2 \frac{dy}{y^2}}. The second expression is more interesting. We have

\displaystyle  \begin{pmatrix} a & b \\ c & d \end{pmatrix} z = \frac{az+b}{cz+d} = \frac{a}{c} - \frac{1}{c(cz+d)}

\displaystyle  = \frac{a}{c} - \frac{cx+d}{c((cx+d)^2+c^2y^2)} + \frac{iy}{(cx+d)^2 + c^2y^2}

so we can write

\displaystyle  \int_{\mathbf H} \overline{f}(z) f( \begin{pmatrix} a & b \\ c & d \end{pmatrix} z)\ d\mu_{\mathbf H}

\displaystyle  = \int_0^\infty \int_{\bf R} \overline{F}(my) F(\frac{my}{(cx+d)^2 + c^2y^2}) e( -mx + \frac{ma}{c} - m \frac{cx+d}{c((cx+d)^2+c^2y^2)} )\ \frac{dx dy}{y^2}

which on shifting {x} by {d/c} simplifies a little to

\displaystyle  e( \frac{ma}{c} + \frac{md}{c} ) \int_0^\infty \int_{\bf R} \overline{F}(my) F(\frac{my}{c^2(x^2 + y^2)}) e(- mx - m \frac{x}{c^2(x^2+y^2)} )\ \frac{dx dy}{y^2}

and then on scaling {x,y} by {m} simplifies a little further to

\displaystyle  e( \frac{ma}{c} + \frac{md}{c} ) \int_0^\infty \int_{\bf R} \overline{F}(y) F(\frac{m^2}{c^2} \frac{y}{x^2 + y^2}) e(- x - \frac{m^2}{c^2} \frac{x}{x^2+y^2} )\ \frac{dx dy}{y^2}.

Note that as {ad-bc=1}, we have {a = \overline{d}} modulo {c}. Comparing the above calculations with (12), we can thus write (13) as

\displaystyle  m (\int_0^\infty |F(y)|^2 \frac{dy}{y^2} + \sum_{q|c} \frac{S(m,m;c)}{c} V(\frac{m}{c})) \ \ \ \ \ (14)


where

\displaystyle  V(u) := \frac{1}{u} \int_0^\infty \int_{\bf R} \overline{F}(y) F(u^2 \frac{y}{x^2 + y^2}) e(- x - u^2 \frac{x}{x^2+y^2} )\ \frac{dx dy}{y^2}

is a certain integral involving {F} and a parameter {u}, but which does not depend explicitly on parameters such as {m,c,d}. Thus we have indeed expressed the {L^2} expression (13) in terms of Kloosterman sums. It is possible to invert this analysis and express various weighted sums of Kloosterman sums in terms of {L^2} expressions (possibly involving inner products instead of norms) of Poincaré series, but we will not do so here; see Chapter 16 of Iwaniec and Kowalski for further details.
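The Kloosterman sums (12) are easy to compute directly, and one can numerically confirm some of their standard properties, such as real-valuedness (pair {x} with {-x}) and the Weil bound {|S(m,n;c)| \leq \tau(c) (m,n,c)^{1/2} c^{1/2}}. A sketch of mine (not from the original discussion):

```python
import cmath
import math
from math import gcd

def kloosterman(m, n, c):
    # S(m, n; c) as in (12)
    total = 0j
    for x in range(1, c + 1):
        if gcd(x, c) == 1:
            xbar = pow(x, -1, c)  # modular inverse (Python 3.8+)
            total += cmath.exp(2j * math.pi * (m * x + n * xbar) / c)
    return total

def tau(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

for c in range(1, 100):
    for m, n in ((1, 1), (2, 3), (5, 7)):
        S = kloosterman(m, n, c)
        assert abs(S.imag) < 1e-8  # S(m, n; c) is real
        assert abs(S) <= tau(c) * math.sqrt(gcd(gcd(m, n), c) * c) + 1e-8
```

For example, {S(1,1;3) = e(2/3) + e(1/3) = -1}, comfortably within the Weil bound {2\sqrt{3}}.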

Traditionally, automorphic forms have been analysed using the spectral theory of the Laplace-Beltrami operator {-\Delta} on spaces such as {\Gamma\backslash {\mathbf H}} or {\Gamma_\infty \backslash {\mathbf H}}, so that a Poincaré series such as {P_\Gamma[f]} might be expanded out using inner products of {P_\Gamma[f]} (or, by the unfolding identities, {f}) with various generalised eigenfunctions of {-\Delta} (such as cuspidal eigenforms, or Eisenstein series). With this approach, special functions, and specifically the modified Bessel functions {K_{it}} of the second kind, play a prominent role, basically because the {\Gamma_\infty}-automorphic functions

\displaystyle  x+iy \mapsto y^{1/2} K_{it}(2\pi |m| y) e(mx)

for {t \in {\bf R}} and {m \in {\bf Z}} non-zero are generalised eigenfunctions of {-\Delta} (with eigenvalue {\frac{1}{4}+t^2}), and are almost square-integrable on {\Gamma_\infty \backslash {\mathbf H}} (the {L^2} norm diverges only logarithmically at one end {y \rightarrow 0^+} of the cylinder {\Gamma_\infty \backslash {\mathbf H}}, while decaying exponentially fast at the other end {y \rightarrow +\infty}).

However, as discussed in this previous post, the spectral theory of an essentially self-adjoint operator such as {-\Delta} is basically equivalent to the theory of various solution operators associated to partial differential equations involving that operator, such as the Helmholtz equation {(-\Delta + k^2) u = f}, the heat equation {\partial_t u = \Delta u}, the Schrödinger equation {i\partial_t u + \Delta u = 0}, or the wave equation {\partial_{tt} u = \Delta u}. Thus, one can hope to rephrase many arguments that involve spectral data of {-\Delta} into arguments that instead involve resolvents {(-\Delta + k^2)^{-1}}, heat kernels {e^{t\Delta}}, Schrödinger propagators {e^{it\Delta}}, or wave propagators {e^{\pm it\sqrt{-\Delta}}}, or involve the PDE more directly (e.g. applying integration by parts and energy methods to solutions of such PDE). This is certainly done to some extent in the existing literature; resolvents and heat kernels, for instance, are often utilised. In this post, I would like to explore the possibility of reformulating spectral arguments instead using the inhomogeneous wave equation

\displaystyle  \partial_{tt} u - \Delta u = F.

Actually it will be a bit more convenient to normalise the Laplacian by {\frac{1}{4}}, and look instead at the automorphic wave equation

\displaystyle  \partial_{tt} u + (-\Delta - \frac{1}{4}) u = F. \ \ \ \ \ (15)

This equation somewhat resembles a “Klein-Gordon” type equation, except that the mass is imaginary! This would lead to pathological behaviour were it not for the negative curvature, which in principle creates a spectral gap of {\frac{1}{4}} that cancels out this factor.

The point is that the wave equation approach gives access to some nice PDE techniques, such as energy methods, Sobolev inequalities and finite speed of propagation, which are somewhat submerged in the spectral framework. The wave equation also interacts well with Poincaré series; if for instance {u} and {F} are {\Gamma_\infty}-automorphic solutions to (15) obeying suitable decay conditions, then their Poincaré series {P_{\Gamma_\infty \backslash \Gamma}[u]} and {P_{\Gamma_\infty \backslash \Gamma}[F]} will be {\Gamma}-automorphic solutions to the same equation (15), basically because the Laplace-Beltrami operator commutes with translations. Because of these facts, it is possible to replicate several standard spectral theory arguments in the wave equation framework, without having to deal directly with things like the asymptotics of modified Bessel functions. The wave equation approach to automorphic theory was introduced by Faddeev and Pavlov (using the Lax-Phillips scattering theory), and developed further by Lax and Phillips, to recover many spectral facts about the Laplacian on modular curves, such as the Weyl law and the Selberg trace formula. Here, I will illustrate this by deriving three basic applications of automorphic methods in a wave equation framework, namely

  • Using the Weil bound on Kloosterman sums to derive Selberg’s 3/16 theorem on the least non-trivial eigenvalue for {-\Delta} on {\Gamma_0(q) \backslash {\mathbf H}} (discussed previously here);
  • Conversely, showing that Selberg’s eigenvalue conjecture (improving Selberg’s {3/16} bound to the optimal {1/4}) implies an optimal bound on (smoothed) sums of Kloosterman sums; and
  • Using the same bound to obtain pointwise bounds on Poincaré series similar to the ones discussed above. (Actually, the argument here does not use the wave equation, instead it just uses the Sobolev inequality.)

This post originated from an attempt to finally learn this part of analytic number theory properly, and to see if I could use a PDE-based perspective to understand it better. Ultimately, this is not that dramatic a departure from the standard approach to this subject, but I found it useful to think of things in this fashion, probably due to my existing background in PDE.

I thank Bill Duke and Ben Green for helpful discussions. My primary reference for this theory was Chapters 15, 16, and 21 of Iwaniec and Kowalski.

— 1. Selberg’s {3/16} theorem —

We begin with a proof of the following celebrated result of Selberg:

Theorem 1 Let {q \geq 1} be a natural number. Then every eigenvalue of {-\Delta} on {L^2(\Gamma_0(q) \backslash {\mathbf H})_0} (the mean zero functions on {\Gamma_0(q) \backslash {\mathbf H}}) is at least {3/16}.

One can show that {-\Delta} has only pure point spectrum below {1/4} on {L^2(\Gamma_0(q) \backslash {\mathbf H})_0} (see this previous blog post for more discussion). Thus, this theorem shows that the spectrum of {-\Delta} on {L^2(\Gamma_0(q) \backslash {\mathbf H})_0} is contained in {[3/16,+\infty)}.

We now prove this theorem. Suppose this were not the case, then we have a non-zero eigenfunction {\phi} of {-\Delta} in {L^2(\Gamma_0(q) \backslash {\mathbf H})_0} with eigenvalue {\frac{1}{4} - \delta^2} for some {\delta > \frac{1}{4}}; we may assume {\phi} to be real-valued, and by elliptic regularity it is smooth (on {{\mathbf H}}). If it is constant in the horizontal variable, thus {\phi(x+iy) = \phi(y)}, then by the {\Gamma_0(q)}-automorphic nature of {\phi} it is easy to see that {\phi} is globally constant, contradicting the fact that it is mean zero but not identically zero. Thus it is not identically constant in the horizontal variable. By Fourier analysis on the cylinder {\Gamma_\infty \backslash {\mathbf H}}, one can then find a {\Gamma_\infty}-automorphic function {f_0} of the form {f_0(x+iy) = F_0(my) e(mx)} for some non-zero integer {m} which has a non-zero inner product with {\phi} on {\Gamma_\infty \backslash {\mathbf H}}, where {F_0: (0,+\infty) \rightarrow {\bf C}} is a smooth compactly supported function.

Now we evolve {f_0} by the wave equation

\displaystyle  \partial_{tt} f - \Delta f - \frac{1}{4} f = 0 \ \ \ \ \ (16)

to obtain a smooth function {f: {\bf R} \times {\mathbf H} \rightarrow {\bf C}} such that {f(0,z) = f_0(z)} and {\partial_t f(0,z) = 0}; the existence (and uniqueness) of such a solution to this initial value problem can be established by standard wave equation methods (e.g. parametrices and energy estimates), or by the spectral theory of the Laplacian. (One can also solve for {f} explicitly in terms of modified Bessel functions, but we will not need to do so here, which is one of the main points of using the wave equation method.) Since the initial data {f_0} obeyed the translation symmetry {f_0(z + x) = f_0(z) e(mx)} for all {z \in {\mathbf H}} and {x \in {\bf R}}, we see (from the uniqueness theory and translation invariance of the wave equation) that {f} also obeys this symmetry; in particular {f(t)} is {\Gamma_\infty}-automorphic for all times {t}. By finite speed of propagation, {f(t)} remains compactly supported in {\Gamma_\infty \backslash {\mathbf H}} for all time {t}, in fact for positive time it will lie in the strip {\{ z: e^{-t} \ll \hbox{Im} z \ll e^t \}}, where we allow the implied constants to depend on the initial data {f_0}.

Taking the inner product of {f} with the eigenfunction {\phi} on {\Gamma_\infty \backslash {\mathbf H}}, differentiating under the integral sign, and integrating by parts, we see that

\displaystyle  \partial_{tt} \langle f, \phi \rangle_{L^2(\Gamma_\infty \backslash {\mathbf H})} - \delta^2 \langle f, \phi \rangle_{L^2(\Gamma_\infty \backslash {\mathbf H})} = 0.

Since {\langle f, \phi \rangle_{L^2(\Gamma_\infty \backslash {\mathbf H})}} is initially non-zero with zero velocity, we conclude from solving the ODE that {\langle f(t), \phi \rangle_{L^2(\Gamma_\infty \backslash {\mathbf H})}} is a non-zero multiple of {\cosh(\delta t)}. In particular, it grows like {e^{\delta t}} as {t \rightarrow +\infty}. Using the unfolding identity (6) to write

\displaystyle  \langle f(t), \phi \rangle_{L^2(\Gamma_\infty \backslash {\mathbf H})} = \langle P_{\Gamma_\infty \backslash \Gamma_0(q)}[f(t)], \phi \rangle_{L^2(\Gamma_0(q) \backslash {\mathbf H})}

and then using the Cauchy-Schwarz inequality, we conclude that

\displaystyle  \int_{\Gamma_0(q) \backslash {\mathbf H}} | P_{\Gamma_\infty \backslash \Gamma_0(q)}[f(t)] |^2\ d\mu_{\Gamma_0(q) \backslash {\mathbf H}} \gg e^{2\delta t} \ \ \ \ \ (17)

as {t \rightarrow +\infty}, where we allow implied constants to depend on {f, q, \phi}.

We complement this lower bound with a slightly crude upper bound in which we are willing to lose some powers of {t}. We have already seen that {f(t)} is supported in the strip {\{ z: e^{-t} \ll \hbox{Im} z \ll e^t \}}. Compactly supported solutions to (16) on the cylinder {\Gamma_\infty \backslash {\mathbf H}} conserve the energy

\displaystyle  \frac{1}{2} \int_{\Gamma_\infty \backslash {\mathbf H}} |\partial_t f|^2 + y^2(|\partial_x f|^2 + |\partial_y f|^2) - \frac{1}{4} |f|^2\ \frac{dx dy}{y^2}.

In particular, this quantity is {O(1)} for all time (recall we allow implied constants to depend on {f,q,\phi}). From Hardy’s inequality, the quantity

\displaystyle  \int_{\Gamma_\infty \backslash {\mathbf H}} |\partial_y f|^2 - \frac{1}{4} \frac{|f|^2}{y^2}\ dx dy

is non-negative. Discarding this term and using {\partial_x f = 2\pi i m f}, and using the fact that {m} is non-zero, we arrive at the bounds

\displaystyle  \int_{\Gamma_\infty \backslash {\mathbf H}} |f|^2\ dx dy \ll 1

and

\displaystyle  \int_{\Gamma_\infty \backslash {\mathbf H}} |\partial_t f|^2\ \frac{dx dy}{y^2} \ll 1.

(We allow implied constants to depend on {m,f,q,\phi}, but not on {t}.) From the fundamental theorem of calculus and Minkowski’s inequality in {L^2}, the latter inequality implies that

\displaystyle  \int_{\Gamma_\infty \backslash {\mathbf H}} |f|^2\ \frac{dx dy}{y^2} \ll t^2

for {t \geq 1}, which on combining with the former inequality gives

\displaystyle  \int_{\Gamma_\infty \backslash {\mathbf H}} |f|^2 (1 + y^{-2})\ dx dy \ll t^2.
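The form of Hardy's inequality invoked above is {\int_0^\infty |\partial_y f|^2\ dy \geq \frac{1}{4} \int_0^\infty |f|^2/y^2\ dy} for {f} compactly supported in {(0,\infty)}. As a numerical sanity check (a sketch with an arbitrary test function of our own choosing):

```python
def midpoint_integral(fun, a, b, n=100000):
    """Midpoint rule for the integral of fun over [a, b]."""
    h = (b - a) / n
    return sum(fun(a + (i + 0.5) * h) for i in range(n)) * h

# test function vanishing at both endpoints, supported in [a, b] away from 0
a, b = 0.5, 3.0
f  = lambda y: (y - a) ** 2 * (b - y) ** 2
df = lambda y: 2 * (y - a) * (b - y) ** 2 - 2 * (y - a) ** 2 * (b - y)

lhs = midpoint_integral(lambda y: df(y) ** 2, a, b)                  # int |f'|^2 dy
rhs = 0.25 * midpoint_integral(lambda y: f(y) ** 2 / y ** 2, a, b)   # (1/4) int |f|^2 / y^2 dy
```

For this test function the left-hand side exceeds the right-hand side by a comfortable margin, as the inequality predicts.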

The function {\Delta f} also obeys the wave equation (16), so a similar argument gives

\displaystyle  \int_{\Gamma_\infty \backslash {\mathbf H}} |\Delta f|^2 (1+y^{-2})\ dx dy \ll t^2.

Applying a Sobolev inequality on unit squares (for {y \geq 1}) or on squares of length comparable to {y} (for {y < 1}) we conclude the pointwise estimates

\displaystyle  |f(t,x+iy)| \ll t

for {y \geq 1} and

\displaystyle  |f(t,x+iy)| \ll t y^{1/2}

for {y < 1}. In particular, writing {f(t,x+iy) = F(t,my) e(mx)}, we have the somewhat crude estimates

\displaystyle  |F(t,y)| \ll t^{O(1)} y^{1/2}

for all {y > 0} and {t \geq 1}. (One can do better than this, particularly for large {y}, but this bound will suffice for us.)

By repeating the analysis of (13) at the start of this post, we see that the quantity

\displaystyle  \int_{\Gamma_0(q) \backslash {\mathbf H}} | P_{\Gamma_\infty \backslash \Gamma_0(q)}[f(t)] |^2\ d\mu_{\Gamma_0(q) \backslash {\mathbf H}} \ \ \ \ \ (18)

can be expressed as

\displaystyle  m (\int_0^\infty |F(t,y)|^2 \frac{dy}{y^2} + \sum_{q|c} \frac{S(m,m;c)}{c} V(t,\frac{m}{c}))

where

\displaystyle  V(t,u) := \frac{1}{u} \int_0^\infty \int_{\bf R} F(t,y) \bar{F}(t,u^2 \frac{y}{x^2 + y^2}) e(- x - u^2 \frac{x}{x^2+y^2} )\ \frac{dx dy}{y^2}.

Since {F(t,y)} is supported on {\{ e^{-t} \ll y \ll e^t \}} and is bounded by {O( t^{O(1)} y^{1/2} )}, the integral {\int_0^\infty |F(t,y)|^2 \frac{dy}{y^2}} is {O(t^{O(1)})} for {t \geq 1}. We also see that {V(t,u)} vanishes unless {u \gg e^{-t}} (otherwise {y} and {u^2 \frac{y}{x^2+y^2} \leq \frac{u^2}{y}} cannot simultaneously be {\gg e^{-t}}), and for such values of {u}, we have the triangle inequality bound

\displaystyle  V(t,u) \ll \frac{t^{O(1)}}{u} \int_{e^{-t} \ll y \ll e^t} \int_{x \ll e^t u} y^{1/2} (u^2 \frac{y}{x^2+y^2})^{1/2}\ \frac{dx dy}{y^2}.

Evaluating the {x} integral and then the {y} integral, we arrive at

\displaystyle  V(t,u) \ll t^{O(1)} \log(2+u)

and so we can bound (18) (ignoring any potential cancellation in {c}) by

\displaystyle  \ll t^{O(1)} ( 1 + \sum_{c \ll e^t} \frac{|S(m,m;c)|}{c} ).

Now we use the Weil bound for Kloosterman sums, which gives

\displaystyle  |S(m,m;c)| \ll c^{1/2 + o(1)}

(see e.g. this previous post for a discussion of this bound) to arrive at the bound

\displaystyle  \int_{\Gamma_0(q) \backslash {\mathbf H}} | P_{\Gamma_\infty \backslash \Gamma_0(q)}[f(t)] |^2\ d\mu_{\Gamma_0(q) \backslash {\mathbf H}} \ll t^{O(1)} e^{t/2}

as {t \rightarrow \infty}. Comparing this with (17) we obtain a contradiction as {t \rightarrow \infty} since we have {\delta > \frac{1}{4}}, and the claim follows.

Remark 2 It was conjectured by Linnik that

\displaystyle  \sum_{c \leq x} \frac{S(m,n;c)}{c} \ll x^{o(1)}

as {x \rightarrow \infty} for any fixed {m,n}; this, when combined with a more refined analysis of the above type, implies the Selberg eigenvalue conjecture that all eigenvalues of {-\Delta} on {L^2(\Gamma_0(q) \backslash {\mathbf H})_0} are at least {1/4}.
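For concreteness, the Kloosterman sum {S(m,n;c) = \sum_{x\ (c):\ (x,c)=1} e(\frac{mx + n\bar{x}}{c})} (with {\bar{x}} the inverse of {x} mod {c}) can be computed by brute force. The following sketch (our own code, not part of the argument) checks the Weil bound numerically and illustrates the cancellation underlying Linnik's conjecture:

```python
import cmath, math

def kloosterman(m, n, c):
    """S(m,n;c) = sum over x coprime to c of e((m*x + n*xbar)/c), xbar = x^{-1} mod c."""
    s = 0 + 0j
    for x in range(1, c + 1):
        if math.gcd(x, c) == 1:
            xbar = pow(x, -1, c)          # modular inverse (Python 3.8+)
            s += cmath.exp(2j * math.pi * (m * x + n * xbar) / c)
    return s.real                          # Kloosterman sums are real

def tau(c):
    """Divisor count d(c)."""
    return sum(1 for d in range(1, c + 1) if c % d == 0)

m, n = 1, 1
# Weil bound: |S(m,n;c)| <= d(c) * gcd(m,n,c)^{1/2} * c^{1/2}
weil_ok = all(
    abs(kloosterman(m, n, c)) <= tau(c) * math.sqrt(math.gcd(m, math.gcd(n, c)) * c) + 1e-6
    for c in range(1, 80)
)

# Linnik-type partial sum versus the corresponding absolute sum:
signed = sum(kloosterman(m, n, c) / c for c in range(1, 200))
absolute = sum(abs(kloosterman(m, n, c)) / c for c in range(1, 200))
```

In practice `signed` is far smaller than `absolute`, reflecting the sign changes of {S(m,n;c)} that Linnik's conjecture quantifies.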

— 2. Consequences of Selberg’s conjecture —

In the previous section we saw how bounds on Kloosterman sums gave rise to lower bounds on eigenvalues of the Laplacian. It turns out that this implication is reversible. The simplest case (at least from the perspective of wave equation methods) is when Selberg’s eigenvalue conjecture is true, so that the Laplacian on {L^2(\Gamma_0(q) \backslash {\mathbf H})_0} has spectrum in {[\frac{1}{4},\infty)}. Equivalently, one has the inequality

\displaystyle  \langle -\Delta f, f \rangle_{L^2(\Gamma_0(q) \backslash {\mathbf H})_0} \geq \frac{1}{4} \langle f, f \rangle_{L^2(\Gamma_0(q) \backslash {\mathbf H})_0}

for all {f \in L^2(\Gamma_0(q) \backslash {\mathbf H})_0} (interpreting derivatives in a distributional sense if necessary). Integrating by parts, this shows that

\displaystyle  \int_{\Gamma_0(q) \backslash {\mathbf H}} |\nabla f|^2 - \frac{1}{4} |f|^2\ d\mu \geq 0 \ \ \ \ \ (19)

for all {f \in L^2(\Gamma_0(q) \backslash {\mathbf H})_0}, where the gradient {\nabla f} and its magnitude {|\nabla f|} are computed using the Riemannian metric in {\Gamma_0(q) \backslash {\mathbf H}}.

Now suppose one has a smooth, compactly supported in space solution {f: {\bf R} \times \Gamma_0(q) \backslash {\mathbf H} \rightarrow {\bf R}} to the inhomogeneous wave equation

\displaystyle  \partial_{tt} f - \Delta f - \frac{1}{4} f = g

for some forcing term {g: {\bf R} \times \Gamma_0(q) \backslash {\mathbf H} \rightarrow {\bf R}} which is also smooth and compactly supported in space. We assume that {f(t)} has mean zero for all {t}. Introducing the energy

\displaystyle  E[f(t)] := \frac{1}{2}\int_{\Gamma_0(q) \backslash {\mathbf H}} |\partial_t f|^2 + |\nabla f|^2 - \frac{1}{4} |f|^2\ d\mu,

which is non-negative thanks to (19) and integrating by parts, we obtain the energy identity

\displaystyle  \partial_t E[f(t)] = \int_{\Gamma_0(q) \backslash {\mathbf H}} (\partial_t f) g\ d\mu

and hence by Cauchy-Schwarz

\displaystyle  \partial_t E[f(t)] \ll E[f(t)]^{1/2} (\int_{\Gamma_0(q) \backslash {\mathbf H}} |g|^2\ d\mu)^{1/2}

and hence

\displaystyle  \partial_t E[f(t)]^{1/2} \ll (\int_{\Gamma_0(q) \backslash {\mathbf H}} |g|^2\ d\mu)^{1/2}

(in a distributional sense at least), giving rise to the energy inequality

\displaystyle  E[f(t)]^{1/2} \ll E[f(0)]^{1/2} + \int_0^t (\int_{\Gamma_0(q) \backslash {\mathbf H}} |g(t')|^2\ d\mu)^{1/2}\ dt'.
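In the model case where {\partial_t E = 2 E^{1/2} G(t)} holds exactly, the above chain of inequalities becomes the identity {E(t)^{1/2} = E(0)^{1/2} + \int_0^t G(t')\ dt'}. A numerical sketch (with an arbitrary forcing profile {G} of our own choosing):

```python
import math

def energy_ode(E0, G, t_end, n_steps=200000):
    """Forward-Euler integration of E'(t) = 2*sqrt(E)*G(t), E(0) = E0."""
    dt = t_end / n_steps
    E = E0
    for i in range(n_steps):
        E += dt * 2.0 * math.sqrt(E) * G(i * dt)
    return E

G = lambda t: 1.0 + 0.5 * math.cos(3.0 * t)   # hypothetical stand-in for t -> ||g(t)||_{L^2}
E0, T = 4.0, 2.0
E_T = energy_ode(E0, G, T)
# closed form for this model: sqrt(E(T)) = sqrt(E0) + int_0^T G
predicted = (math.sqrt(E0) + T + 0.5 * math.sin(3.0 * T) / 3.0) ** 2
```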

We can lift this inequality to the cylinder {\Gamma_\infty \backslash {\mathbf H}}, concluding that for any smooth, compactly supported in space solution {f:{\bf R} \times (\Gamma_\infty \backslash {\mathbf H}) \rightarrow {\bf R}} to the inhomogeneous equation

\displaystyle  \partial_{tt} f - \Delta f - \frac{1}{4} f = g \ \ \ \ \ (20)

for some forcing term {g: {\bf R} \times \Gamma_\infty \backslash {\mathbf H} \rightarrow {\bf R}} which is also smooth and compactly supported in space, with {f} mean zero for all time, we have the energy inequality

\displaystyle  E[P_{\Gamma_\infty \backslash \Gamma_0(q)}[f(t)]]^{1/2} \ll E[P_{\Gamma_\infty \backslash \Gamma_0(q)}[f(0)]]^{1/2}

\displaystyle + \int_0^t (\int_{\Gamma_0(q) \backslash {\mathbf H}} |P_{\Gamma_\infty \backslash \Gamma_0(q)}[g(t')]|^2\ d\mu)^{1/2}\ dt'.

One can use this inequality to analyse the {L^2} norm of Poincaré series by testing on various functions {f} (and working out {g} using (20)). Suppose for instance that {m} is a fixed natural number, and {\psi: {\bf R} \rightarrow {\bf R}} is a smooth compactly supported function. We consider the traveling wave {f:{\bf R} \times \Gamma_\infty \backslash {\mathbf H} \rightarrow {\bf R}} given by the formula

\displaystyle  f(t,x+iy) := y^{1/2} \Psi( - \log y ) \Psi( t + \log y ) e(mx)

where {\Psi(u) = \int_{-\infty}^u \psi(v)\ dv} is the primitive of {\psi}; the point is that this is an approximate solution to the homogeneous wave equation, particularly at small values of {y}. Clearly {f(t)} is compactly supported with mean zero for {t \geq 0}, in the region {\{ x+iy: e^{-t} \ll y \ll 1 \}} (we allow implied constants to depend on {\psi,m} but not on {q}). In the region {y \sim 1}, {f} and its first derivatives are {O(1)}, giving a contribution of {O(1)} to the energy {E[P_{\Gamma_\infty \backslash \Gamma_0(q)}[f(t)]]} (note that the shifts of the region {\{ 0 \leq x \leq 1; y \sim 1\}} by {\hbox{PSL}_2({\bf Z})} have bounded overlap). In particular we have

\displaystyle  E[P_{\Gamma_\infty \backslash \Gamma_0(q)}[f(0)]] \ll 1

and thus by the energy inequality (using only the {|\partial_t f|^2} portion of the energy)

\displaystyle  (\int_{\Gamma_0(q) \backslash {\mathbf H}} |P_{\Gamma_\infty \backslash \Gamma_0(q)}[\partial_t f]|^2\ d\mu)^{1/2} \ll 1 + \int_0^t (\int_{\Gamma_0(q) \backslash {\mathbf H}} |P_{\Gamma_\infty \backslash \Gamma_0(q)}[g(t')]|^2\ d\mu)^{1/2}\ dt'

for {t \geq 0}, where

\displaystyle  g := \partial_{tt} f - \Delta f - \frac{1}{4} f.

Clearly {g} is supported on the region {\{ x+iy: e^{-t} \ll y \ll 1 \}}. For {y \sim 1}, one can compute that {g=O(1)}, giving a contribution of {O(t)} to the right-hand side. When {y} is much less than {1} but much larger than {e^{-t}}, we have {f(x+iy) = y^{1/2} e(mx)}, which after some calculation yields {g(x+iy) = 4\pi^2 m^2 y^{5/2} e(mx)}. As this decays so quickly as {y \rightarrow 0}, one can compute (using for instance the expansion (14) of (13) and crude estimates, ignoring all cancellation) that this contributes a total of {O(t)} to the right-hand side also. Finally, one has to deal with the region where {y \sim e^{-t}} (so that {y} is in particular much less than {1}). Here, {\partial_t f} is equal to {y^{1/2} \psi( t + \log y ) e(mx)}, and {f} is equal to {y^{1/2} \Psi(t + \log y) e(mx)}, which after some computation makes {g} equal to {4\pi^2 m^2 y^{5/2} \Psi(t + \log y) e(mx)}. Again, one can compute the contribution of this term to the energy inequality to be {O(t)}. We conclude that

\displaystyle  (\int_{\Gamma_0(q) \backslash {\mathbf H}} |P_{\Gamma_\infty \backslash \Gamma_0(q)}[y^{1/2} \psi(t+\log y) e(mx)]|^2\ d\mu)^{1/2} \ll 1+t.
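The computation of {g} in the intermediate region can be double-checked by finite differences; the identity {g = 4\pi^2 m^2 y^{5/2} \Psi(t+\log y) e(mx)} for {f = y^{1/2} \Psi(t+\log y) e(mx)} is valid for any smooth {\Psi}, so in the sketch below we take {\tanh} as an arbitrary smooth stand-in:

```python
import cmath, math

m = 2
Psi = math.tanh        # arbitrary smooth stand-in for the primitive of psi

def f(t, x, y):
    """The profile y^{1/2} Psi(t + log y) e(mx)."""
    return math.sqrt(y) * Psi(t + math.log(y)) * cmath.exp(2j * math.pi * m * x)

def second_diff(fun, h=1e-4):
    """Central second difference at 0."""
    return (fun(h) - 2.0 * fun(0.0) + fun(-h)) / (h * h)

t0, x0, y0 = 0.3, 0.17, 0.8
f_tt = second_diff(lambda s: f(t0 + s, x0, y0))
f_xx = second_diff(lambda s: f(t0, x0 + s, y0))
f_yy = second_diff(lambda s: f(t0, x0, y0 + s))
# g = f_tt - Delta f - f/4, with Delta = y^2 (d_xx + d_yy)
g_num = f_tt - y0 * y0 * (f_xx + f_yy) - 0.25 * f(t0, x0, y0)
g_formula = (4 * math.pi ** 2 * m ** 2 * y0 ** 2.5
             * Psi(t0 + math.log(y0)) * cmath.exp(2j * math.pi * m * x0))
```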

Applying the expansion (14) of (13), we conclude that

\displaystyle  \sum_{q|c} \frac{S(m,m;c)}{c} V(\frac{m}{c}) \ll (1+t)^2 \ \ \ \ \ (21)

where

\displaystyle  V(u) := \int_0^\infty \int_{\bf R} \frac{y}{\sqrt{x^2+y^2}} \psi(t + \log y) \psi(t + \log(u^2 \frac{y}{x^2 + y^2}))

\displaystyle e(- x - u^2 \frac{x}{x^2+y^2} )\ \frac{dx dy}{y^2}.

The expression {V(u)} is only non-zero when {u \gg e^{-t}}, and the integrand is only non-zero when {y \sim e^{-t}} and {x \sim u}, which makes the phase {-x - u^2 \frac{x}{x^2+y^2}} of size {O(u)}. For {u} much smaller than {1}, the phase is thus largely irrelevant and the quantity {V(u)} is roughly comparable to {1} for {u \gg e^{-t}}. As such, the bound (21) can be viewed as a smoothed out version of the estimate

\displaystyle  \sum_{q|c: c \ll e^t} \frac{S(m,m;c)}{c} \ll (1+t)^2

which is basically Linnik’s conjecture, mentioned in Remark 2. One can make this connection between Selberg’s eigenvalue conjecture and Linnik’s conjecture more precise: see Section 16.6 of Iwaniec and Kowalski, which goes through modified Bessel functions rather than through wave equation methods.

— 3. Pointwise bounds on Poincaré series —

The formula (14) for (13) allows one to compute {L^2} norms of Poincaré series. By using Sobolev embedding, one can then obtain pointwise control on such Poincaré series, as long as one stays away from the cusps. For instance, suppose we are interested in evaluating a Poincaré series {P_{\Gamma_\infty \backslash \Gamma_0(q)}[f]} at a point of the form {z = \gamma i} for some {\gamma \in \hbox{PSL}_2({\bf Z})}. From the Sobolev inequality we have

\displaystyle  |f(i)|^2 \ll \int_{B(i,1)} |f|^2 + |\Delta f|^2\ d\mu_{\mathbf H}

for any smooth function {f}, and thus by translation

\displaystyle  |f(\gamma i)|^2 \ll \int_{B(\gamma i,1)} |f|^2 + |\Delta f|^2\ d\mu_{\mathbf H}.

The ball {B(i,1)} meets only boundedly many translates of the standard fundamental domain of {\hbox{PSL}_2({\bf Z}) \backslash {\mathbf H}}, and hence {B(\gamma i,1)} does too. Since {\Gamma_0(q)} is a subgroup of {\hbox{PSL}_2({\bf Z})}, we conclude that {B(\gamma i,1)} meets only boundedly many translates of a fundamental domain for {\Gamma_0(q) \backslash {\mathbf H}}. In particular, we obtain the Sobolev inequality

\displaystyle  |f(\gamma i)|^2 \ll \int_{\Gamma_0(q) \backslash {\mathbf H}} |f|^2 + |\Delta f|^2\ d\mu_{\Gamma_0(q) \backslash \mathbf H} \ \ \ \ \ (22)

for any smooth {\Gamma_0(q)}-automorphic function {f}. This estimate is unfortunately a little inefficient when {q} is large, since the ball {B(\gamma i,1)} has area comparable to one, whereas the quotient space {\Gamma_0(q) \backslash {\mathbf H}} has area roughly comparable to {q}, so that one is conceding quite a bit by replacing the ball by the quotient space. Nevertheless this estimate is still useful enough to give some good results. We illustrate this by proving the estimate

\displaystyle  \sum_{n: q|n} \sum_{\nu \in {\bf Z}/n{\bf Z}: \nu^2 + 1 = 0 \hbox{ mod } n} e( \frac{m \nu}{n} ) g( \frac{n}{X} ) \ll X^{1/2+o(1)} + q^{-1/2} X^{3/4+o(1)} \ \ \ \ \ (23)

for {1 \leq q,m \leq X} with {m} coprime to {q}, where {g: {\bf R} \rightarrow {\bf R}} is a fixed smooth function supported in, say, {[1,2]} (and implied constants are allowed to depend on {g}), and the asymptotic notation is with regard to the limit {X \rightarrow \infty}. This type of estimate (appearing for instance, in a stronger form, in this paper of Duke, Friedlander, and Iwaniec; see also Proposition 21.10 of Iwaniec and Kowalski) establishes some equidistribution of the square roots {\{\nu \in {\bf Z}/n{\bf Z}: \nu^2 + 1 = 0 \hbox{ mod } n\}} as {n} varies (while staying comparable to {X}). For comparison, crude estimates (ignoring the cancellation in the phase {e(\frac{m \nu}{n})}) give a bound of {O( X^{1+o(1)} / q )}, so the bound (23) is non-trivial whenever {q} is significantly smaller than {X^{1/2}}. Estimates such as (23) are also useful for getting good error terms in the asymptotics for the expression (10), as was first done by Hooley.
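The left-hand side of (23) is easy to evaluate by brute force for small parameters, which is a useful sanity check on the normalisations (a sketch; the bump {g} and the parameter values below are our own arbitrary choices):

```python
import cmath, math

def sqrt_minus_one(n):
    """All nu in Z/nZ with nu^2 + 1 = 0 mod n."""
    return [nu for nu in range(n) if (nu * nu + 1) % n == 0]

def g(u):
    """A fixed smooth bump supported in [1,2] (an arbitrary choice)."""
    return math.exp(-1.0 / ((u - 1.0) * (2.0 - u))) if 1.0 < u < 2.0 else 0.0

def lhs_23(m, q, X):
    """Brute-force evaluation of the left-hand side of (23)."""
    total = 0.0
    for n in range(q, 2 * X + 1, q):           # q | n; g(n/X) vanishes unless X < n < 2X
        w = g(n / X)
        if w:
            total += w * sum(cmath.exp(2j * math.pi * m * nu / n).real
                             for nu in sqrt_minus_one(n))
    return total

val = lhs_23(m=3, q=5, X=50)
```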

One can write (23) in terms of Poincaré series much as was done for (10). Using the fact that the discriminant {-1} has class number one as before, we see that for every positive {n} and {\nu \in {\bf Z}/n{\bf Z}} with {\nu^2 + 1 = 0 \hbox{ mod } n}, we can find an element {\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix}} of {\hbox{PSL}_2({\bf Z})} such that {\gamma i = \frac{ai+b}{ci+d}} has imaginary part {\frac{1}{n}} and real part {\frac{\nu}{n}} modulo one (thus, {n = c^2+d^2} and {\nu = ac + bd \hbox{ mod } n}); this element {\gamma} is unique up to left translation by {\Gamma_\infty}. We can thus write the left-hand side of (23) as

\displaystyle  \sum_{\gamma \in \hbox{Fund}( \Gamma_\infty \backslash \hbox{PSL}_2({\bf Z})): q|c^2+d^2} F( \gamma i )

where

\displaystyle  F(x+iy) = e( mx ) g( \frac{1}{Xy} )

and {c,d} are the bottom two entries of the matrix {\gamma} (determined up to sign). The condition {q | c^2+d^2} implies (since {c,d} must be coprime) that {c,d} are coprime to {q} with {c/d = \delta \hbox{ mod } q} for some {\delta} with {\delta^2+1=0 \hbox{ mod } q}; conversely, if {c,d} obey such a condition then {q|c^2+d^2}. The number of such {\delta} is at most {X^{o(1)}}. Thus it suffices to show that

\displaystyle  \sum_{\gamma \in \hbox{Fund}( \Gamma_\infty \backslash \hbox{PSL}_2({\bf Z})): c/d = \delta \hbox{ mod } q} F( \gamma i ) \ll X^{1/2+o(1)} + q^{-1/2} X^{3/4+o(1)}

for each such {\delta}.
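The correspondence {n = c^2+d^2}, {\nu = ac+bd \hbox{ mod } n} used above can be verified on concrete matrices (a sketch; completing a coprime pair {(c,d)} to a matrix via the extended Euclidean algorithm is our implementation choice):

```python
def complete_to_sl2(c, d):
    """Return (a, b) with a*d - b*c = 1, given coprime c, d > 0 (extended Euclid)."""
    old_r, r = d, c
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        qt = old_r // r
        old_r, r = r, old_r - qt * r
        old_s, s = s, old_s - qt * s
        old_t, t = t, old_t - qt * t
    # now old_s * d + old_t * c = old_r = gcd(c, d) = 1
    return old_s, -old_t

checks = []
for c, d in [(1, 2), (2, 3), (3, 5), (5, 7), (4, 9)]:
    a, b = complete_to_sl2(c, d)
    n = c * c + d * d
    z = (a * 1j + b) / (c * 1j + d)            # gamma applied to i
    nu = (a * c + b * d) % n
    frac = (z.real - nu / n) % 1.0             # real part should be nu/n modulo one
    checks.append((a * d - b * c == 1,
                   abs(z.imag - 1.0 / n) < 1e-12,
                   min(frac, 1.0 - frac) < 1e-9,
                   (nu * nu + 1) % n == 0))
```

In each case the imaginary part of {\gamma i} is {1/n}, the real part is {\nu/n} modulo one, and {\nu} is indeed a square root of {-1} mod {n}.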

The constraint {c/d = \delta \hbox{ mod } q} constrains {\gamma} to a single right coset of {\Gamma_0(q)}. Thus the left-hand side can be written as

\displaystyle  \sum_{\gamma' \in \hbox{Fund}( \Gamma_\infty \backslash \Gamma_0(q))} F( \gamma' \gamma i )

which is just {P_{\Gamma_\infty \backslash \Gamma_0(q)}[F](\gamma i)}. Applying (22) (and interchanging the Poincaré series and the Laplacian), it thus suffices to show that

\displaystyle  \int_{\Gamma_0(q) \backslash {\mathbf H}} |P_{\Gamma_\infty \backslash \Gamma_0(q)}[F]|^2\ d\mu_{\Gamma_0(q) \backslash {\mathbf H}} \ll X^{1+o(1)} + q^{-1} X^{3/2+o(1)} \ \ \ \ \ (24)

and

\displaystyle  \int_{\Gamma_0(q) \backslash {\mathbf H}} |P_{\Gamma_\infty \backslash \Gamma_0(q)}[\Delta F]|^2\ d\mu_{\Gamma_0(q) \backslash {\mathbf H}} \ll X^{1+o(1)} + q^{-1} X^{3/2+o(1)}. \ \ \ \ \ (25)

We can compute

\displaystyle  \Delta F(x+iy) = e(mx) \tilde g( \frac{1}{Xy} )

where

\displaystyle  \tilde g(u) = 2 u g'(u) + u^2 g''(u) - \frac{4\pi^2 m^2}{X^2} u^{-2} g(u).

By hypothesis, the coefficient {\frac{4\pi^2 m^2}{X^2}} is bounded, and so {\tilde g} has all derivatives bounded while remaining supported in {[1,2]}. Because of this, the arguments used to establish (24) can be adapted without difficulty to establish (25).
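The formula for {\tilde g} is an algebraic identity valid for any smooth {g} (the support condition plays no role in it), so it can be checked by finite differences with, say, {g = \sin} (a numerical sketch with our own parameter choices):

```python
import cmath, math

m, X = 2, 10.0
g, dg, d2g = math.sin, math.cos, lambda u: -math.sin(u)   # any smooth g works here

def F(x, y):
    """F(x+iy) = e(mx) g(1/(Xy))."""
    return cmath.exp(2j * math.pi * m * x) * g(1.0 / (X * y))

def g_tilde(u):
    """2u g'(u) + u^2 g''(u) - (4 pi^2 m^2 / X^2) u^{-2} g(u)."""
    return 2 * u * dg(u) + u * u * d2g(u) - (4 * math.pi ** 2 * m ** 2 / X ** 2) * g(u) / u ** 2

x0, y0, h = 0.23, 0.7, 1e-4
F_xx = (F(x0 + h, y0) - 2 * F(x0, y0) + F(x0 - h, y0)) / h ** 2
F_yy = (F(x0, y0 + h) - 2 * F(x0, y0) + F(x0, y0 - h)) / h ** 2
laplacian = y0 * y0 * (F_xx + F_yy)                 # Delta F = y^2 (F_xx + F_yy)
predicted = cmath.exp(2j * math.pi * m * x0) * g_tilde(1.0 / (X * y0))
```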

Using the expansion (14) of (13), we can write the left-hand side of (24) as

\displaystyle  m (\int_0^\infty |g( \frac{m}{Xy})|^2 \frac{dy}{y^2} + \sum_{q|c} \frac{S(m,m;c)}{c} V(\frac{m}{c}))

where

\displaystyle  V(u) := \frac{1}{u} \int_0^\infty \int_{\bf R} g( \frac{m}{Xy} ) g(\frac{m (x^2+y^2)}{X u^2 y}) e(- x - u^2 \frac{x}{x^2+y^2} )\ \frac{dx dy}{y^2}.

The first term can be computed to give a contribution of {O(X)}, so it suffices to show that

\displaystyle  \sum_{q|c} \frac{S(m,m;c)}{c} V(\frac{m}{c}) \ll m^{-1} q^{-1} X^{3/2+o(1)}. \ \ \ \ \ (26)

The quantity {V(u)} vanishes unless {u \gg m/X}. In that case, the integrand vanishes unless {y \sim m/X} and {x = O(u)}, so by the triangle inequality we have {V(u) \ll X/m}. So the left-hand side of (26) is bounded by

\displaystyle  \ll \frac{X}{m} \sum_{q|c: c \ll X} \frac{|S(m,m;c)|}{c}.

By the Weil bound for Kloosterman sums, we have {|S(m,m;c)| \ll X^{o(1)} (m,c)^{1/2} c^{1/2}}, so on factoring out {q} from {c} we can bound the previous expression by

\displaystyle  \ll \frac{X^{1+o(1)}}{m} q^{-1/2} \sum_{c: c \ll X/q} (m,c)^{1/2} c^{-1/2}

\displaystyle  \ll \frac{X^{1+o(1)}}{m} q^{-1/2} \sum_{d|m} \sum_{c: c \ll X/q; d|c} d^{1/2} c^{-1/2}

\displaystyle  \ll \frac{X^{1+o(1)}}{m} q^{-1/2} \sum_{d|m} d^{1/2} (\frac{X}{qd})^{1/2}

\displaystyle  \ll \frac{X^{1+o(1)}}{m} q^{-1/2} X^{o(1)} \frac{X^{1/2}}{q^{1/2}}

and the claim follows.

Remark 3 By using improvements to Selberg’s 3/16 theorem (such as the result of Kim and Sarnak improving this fraction to {\frac{975}{4096}}) one can improve the second term in the right-hand side of (23) slightly.