
An extremely large portion of mathematics is concerned with locating solutions to equations such as

for in some suitable domain space (either finite-dimensional or infinite-dimensional), and various maps or . To solve equation (1), the simplest general method available is the fixed point iteration method: one starts with an initial *approximate solution* to (1), so that , and then recursively constructs the sequence by . If behaves enough like a “contraction”, and the domain is complete, then one can expect the to converge to a limit , which should then be a solution to (1). For instance, if is a map from a metric space to itself, which is a contraction in the sense that

for all and some , then with as above we have

for any , and so the distances between successive elements of the sequence decay at at least a geometric rate. This leads to the contraction mapping theorem, which has many important consequences, such as the inverse function theorem and the Picard existence theorem.
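As a quick numerical illustration of this geometric convergence, here is a minimal sketch using the standard textbook example f(x) = cos(x), which is a contraction on [0, 1] (its derivative is bounded in size by sin(1) ≈ 0.84 there); the example map is my own choice, not one from the discussion above.

```python
import math

def fixed_point_iterate(f, x0, steps):
    # the fixed point iteration x_{n+1} = f(x_n)
    xs = [x0]
    for _ in range(steps):
        xs.append(f(xs[-1]))
    return xs

xs = fixed_point_iterate(math.cos, 1.0, 60)
x_star = xs[-1]                    # limit is the unique fixed point of cos
gaps = [abs(b - a) for a, b in zip(xs, xs[1:])]
# successive gaps shrink by roughly the contraction constant each step
ratios = [gaps[i + 1] / gaps[i] for i in range(10)]
print(x_star, max(ratios))
```

The ratios of successive gaps stay below 1 (they hover near |cos′(x⋆)| ≈ 0.67), which is the geometric decay that drives the contraction mapping theorem.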

A slightly more complicated instance of this strategy arises when trying to *linearise* a complex map defined in a neighbourhood of a fixed point. For simplicity we normalise the fixed point to be the origin, thus and . When studying the complex dynamics , , of such a map, it can be useful to try to conjugate to another function , where is a holomorphic function defined and invertible near with , since the dynamics of will be conjugate to that of . Note that if and , then from the chain rule any conjugate of will also have and . Thus, the “simplest” function one can hope to conjugate to is the linear function . Let us say that is *linearisable* (around ) if it is conjugate to in some neighbourhood of . Equivalently, is linearisable if there is a solution to the Schröder equation

for some defined and invertible in a neighbourhood of with , and all sufficiently close to . (The Schröder equation is normalised somewhat differently in the literature, but this form is equivalent to the usual form, at least when is non-zero.) Note that if solves the above equation, then so does for any non-zero , so we may normalise in addition to , which also ensures local invertibility from the inverse function theorem. (Note from winding number considerations that cannot be invertible near zero if vanishes.)

We have the following basic result of Koenigs:

Theorem 1 (Koenigs linearisation theorem) Let be a holomorphic function defined near with and . If (attracting case) or (repelling case), then is linearisable near zero.

*Proof:* Observe that if solve (2), then solve (2) also (in a sufficiently small neighbourhood of zero). Thus we may reduce to the attracting case .

Let be a sufficiently small radius, and let denote the space of holomorphic functions on the complex disk with and . We can view the Schröder equation (2) as a fixed point equation

where is the partially defined function on that maps a function to the function defined by

assuming that is well-defined on the range of (this is why is only partially defined).

We can solve this equation by the fixed point iteration method, if is small enough. Namely, we start with being the identity map, and set , etc. We equip with the uniform metric . Observe that if , and is small enough, then takes values in , and are well-defined and lie in . Also, since is smooth and has derivative at , we have

if , and is sufficiently small depending on . This is not yet enough to establish the required contraction (thanks to Mario Bonk for pointing this out); but observe that the function is holomorphic on and bounded by on the boundary of this ball (or slightly within this boundary), so by the maximum principle we see that

on all of , and in particular

on . Putting all this together, we see that

since , we thus obtain a contraction on the ball if is small enough (and sufficiently small depending on ). From this (and the completeness of , which follows from Morera’s theorem) we see that the iteration converges (exponentially fast) to a limit which is a fixed point of , and thus solves Schröder’s equation, as required.
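The conclusion of the theorem can be checked numerically by a closely related construction: the classical Koenigs approximants f⁽ⁿ⁾(z)/λⁿ (the n-fold iterate, renormalised) converge to a solution φ of Schröder's equation in the attracting case. The model map f(z) = λz + z² with λ = 1/2 below is an assumed example of my own, not taken from the text.

```python
# Koenigs approximants for the assumed model map f(z) = λ z + z², λ = 1/2
lam = 0.5

def f(z):
    return lam * z + z * z

def phi(z, n=40):
    # n-th Koenigs approximant f^n(z) / λ^n
    w = z
    for _ in range(n):
        w = f(w)
    return w / lam ** n

z = 0.1 + 0.05j
# Schröder's equation φ(f(z)) = λ φ(z) should hold up to a tiny error
residual = abs(phi(f(z)) - lam * phi(z))
print(residual)
```

The residual is at the level of rounding error, reflecting the geometric convergence of the approximants.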

The Koenigs linearisation theorem leaves open the *indifferent case* when . In the *rationally indifferent* case when for some natural number , there is an obvious obstruction to linearisability, namely that (in particular, linearisation is not possible in this case when is a non-trivial rational function). An obstruction is also present in some *irrationally indifferent* cases (where but for any natural number ), if is sufficiently close to various roots of unity; the first result of this form is due to Cremer, and the optimal result of this type for quadratic maps was established by Yoccoz. In the other direction, we have the following result of Siegel:

Theorem 2 (Siegel’s linearisation theorem) Let be a holomorphic function defined near with and . If and one has the Diophantine condition for all natural numbers and some constant , then is linearisable at .

The Diophantine condition can be relaxed to a more general condition involving the rational exponents of the phase of ; this was worked out by Brjuno, with the condition matching the one later obtained by Yoccoz. Amusingly, while the set of Diophantine numbers (and hence the set of linearisable ) has full measure on the unit circle, the set of non-linearisable is generic (the complement of countably many nowhere dense sets) due to the above-mentioned work of Cremer, leading to a striking disparity between the measure-theoretic and category notions of “largeness”.
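The role of the Diophantine condition can be glimpsed numerically: the formal linearising power series has coefficients with the small divisors λⁿ − 1 in their denominators, and for a badly approximable rotation number such as the golden mean these divisors decay no faster than 1/n. A minimal sketch (the choice of the golden mean is illustrative, not from the text):

```python
import cmath, math

# small divisors λ^n − 1 for λ = e^{2πiα} with α the golden mean
alpha = (math.sqrt(5) - 1) / 2          # golden mean, badly approximable
lam = cmath.exp(2j * math.pi * alpha)

divisors = [abs(lam ** n - 1) for n in range(1, 2001)]
# for a Diophantine α one has |λ^n − 1| ≥ c/n; check n·|λ^n − 1| stays
# bounded below over this range
worst = min(n * d for n, d in enumerate(divisors, start=1))
print(worst)
```

For a Liouville-type α, by contrast, this minimum would collapse towards zero, which is the mechanism behind the Cremer-type obstructions.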

Siegel’s theorem does not seem to be provable using a fixed point iteration method. However, it can be established by modifying another basic method to solve equations, namely Newton’s method. Let us first review how this method works to solve the equation for some smooth function defined on an interval . We suppose we have some initial approximant to this equation, with small but not necessarily zero. To make the analysis more quantitative, let us suppose that the interval lies in for some , and we have the estimates

for some and and all (the factors of are present to make “dimensionless”).

Lemma 3 Under the above hypotheses, we can find with such that

In particular, setting , , and , we have , and

for all .

The crucial point here is that the new error is roughly the square of the previous error . This leads to extremely fast (double-exponential) improvement in the error upon iteration, which is more than enough to absorb the exponential losses coming from the factor.

*Proof:* If for some absolute constants then we may simply take , so we may assume that for some small and large . Using the Newton approximation we are led to the choice

for . From the hypotheses on and the smallness hypothesis on we certainly have . From Taylor’s theorem with remainder we have

and the claim follows.

We can iterate this procedure; starting with as above, we obtain a sequence of nested intervals with , and with evolving by the recursive equations and estimates

If is sufficiently small depending on , we see that converges rapidly to zero (indeed, we can inductively obtain a bound of the form for some large absolute constant if is small enough), and converges to a limit which then solves the equation by the continuity of .
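The double-exponential decay of the errors is easy to see in a concrete Newton iteration. A minimal sketch, using the assumed model equation F(x) = x² − 2 = 0 (again my own illustrative choice):

```python
import math

def newton_step(x):
    # Newton's method x ↦ x − F(x)/F'(x) for F(x) = x² − 2
    return x - (x * x - 2) / (2 * x)

x, errors = 1.0, []
for _ in range(6):
    x = newton_step(x)
    errors.append(abs(x - math.sqrt(2)))
print(errors)
# each error is comparable to the square of the previous one
```

The error goes from about 10⁻¹ to 10⁻³ to 10⁻⁶ to 10⁻¹² in successive steps, which is the quadratic (and hence double-exponential in n) improvement exploited in the scheme above.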

As I recently learned from Zhiqiang Li, a similar scheme works to prove Siegel’s theorem, as can be found for instance in this text of Carleson and Gamelin. The key is the following analogue of Lemma 3.

Lemma 4 Let be a complex number with and for all natural numbers . Let , and let be a holomorphic function with , , and

for all and some . Let , and set . Then there exists an injective holomorphic function and a holomorphic function such that

and

for all and some .

*Proof:* By scaling we may normalise . If for some constants , then we can simply take to be the identity and , so we may assume that for some small and large .

To motivate the choice of , we write and , with and viewed as small. We would like to have , which expands as

As and are both small, we can heuristically approximate up to quadratic errors (compare with the Newton approximation ), and arrive at the equation

This equation can be solved by Taylor series; the function vanishes to second order at the origin and thus has a Taylor expansion

and then has a Taylor expansion

We take this as our definition of , define , and then define implicitly via (4).

Let us now justify that this choice works. By (3) and the generalised Cauchy integral formula, we have for all ; by the Diophantine assumption on , we thus have . In particular, converges on , and on the disk (say) we have the bounds

In particular, as is so small, we see that maps injectively to and to , and the inverse maps to . From (3) we see that maps to , and so if we set to be the function , then is a holomorphic function obeying (4). Expanding (4) in terms of and as before, and also writing , we have

for , which by (5) simplifies to

From (6), the fundamental theorem of calculus, and the smallness of we have

and thus

From (3) and the Cauchy integral formula we have on (say) , and so from (6) and the fundamental theorem of calculus we conclude that

on , and the claim follows.

If we set , , and to be sufficiently small, then (since vanishes to second order at the origin), the hypotheses of this lemma will be obeyed for some sufficiently small . Iterating the lemma (and halving repeatedly), we can then find sequences , injective holomorphic functions and holomorphic functions such that one has the recursive identities and estimates

for all and . By construction, decreases to a positive radius that is a constant multiple of , while (for small enough) converges double-exponentially to zero, so in particular converges uniformly to on . Also, since is close enough to the identity, the compositions are uniformly convergent on with and . From this we have

on , and on taking limits using Morera’s theorem we obtain a holomorphic function defined near with , , and

obtaining the required linearisation.

Remark 5 The idea of using a Newton-type method to obtain error terms that decay double-exponentially, and can therefore absorb exponential losses in the iteration, also occurs in KAM theory and in Nash-Moser iteration, presumably due to Siegel’s influence on Moser. (I discuss Nash-Moser iteration in this note that I wrote back in 2006.)

In Notes 2, the Riemann zeta function (and more generally, the Dirichlet -functions ) were extended meromorphically into the region in and to the right of the critical strip. This is a sufficient amount of meromorphic continuation for many applications in analytic number theory, such as establishing the prime number theorem and its variants. The zeroes of the zeta function in the critical strip are known as the *non-trivial zeroes* of , and thanks to the truncated explicit formulae developed in Notes 2, they control the asymptotic distribution of the primes (up to small errors).

The function obeys the trivial functional equation

for all in its domain of definition. Indeed, as is real-valued when is real, the function vanishes on the real line and is also meromorphic, and hence vanishes everywhere. Similarly one has the functional equation

From these equations we see that the zeroes of the zeta function are symmetric across the real axis, and the zeroes of are the reflection of the zeroes of across this axis.

It is a remarkable fact that these functions obey an additional, and more non-trivial, functional equation, this time establishing a symmetry across the *critical line* rather than the real axis. One consequence of this symmetry is that the zeta function and -functions may be extended meromorphically to the entire complex plane. For the zeta function, the functional equation was discovered by Riemann, and reads as follows:

Theorem 1 (Functional equation for the Riemann zeta function) The Riemann zeta function extends meromorphically to the entire complex plane, with a simple pole at and no other poles. Furthermore, one has the functional equation

for all complex other than , where is the function

Here , are the complex-analytic extensions of the classical trigonometric functions , and is the Gamma function, whose definition and properties we review below the fold.
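As a numerical sanity check of the functional equation, one can use it in the equivalent form ζ(1−s) = 2¹⁻ˢ π⁻ˢ cos(πs/2) Γ(s) ζ(s) at s = 2: the direct sum gives ζ(2) = π²/6, and the functional equation then produces the value ζ(−1) = −1/12. A minimal sketch:

```python
import math

# ζ(2) by direct summation (truncated; error is about 1/N)
zeta2 = sum(1 / n ** 2 for n in range(1, 200001))

# functional equation ζ(1−s) = 2^{1−s} π^{−s} cos(πs/2) Γ(s) ζ(s) at s = 2
s = 2
zeta_minus1 = (2 ** (1 - s) * math.pi ** (-s)
               * math.cos(math.pi * s / 2) * math.gamma(s) * zeta2)
print(zeta2, zeta_minus1)   # ≈ π²/6 ≈ 1.6449…  and  ≈ −1/12 ≈ −0.08333…
```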

The functional equation can be placed in a more symmetric form as follows:

Corollary 2 (Functional equation for the Riemann xi function) The Riemann xi function

is analytic on the entire complex plane (after removing all removable singularities), and obeys the functional equations

In particular, the zeroes of consist precisely of the non-trivial zeroes of , and are symmetric about both the real axis and the critical line. Also, is real-valued on the critical line and on the real axis.

Corollary 2 is an easy consequence of Theorem 1 together with the duplication theorem for the Gamma function, and the fact that has no zeroes to the right of the critical strip, and is left as an exercise to the reader (Exercise 19). The functional equation in Theorem 1 has many proofs, but most of them are related in one way or another to the Poisson summation formula

(Theorem 34 from Supplement 2, at least in the case when is twice continuously differentiable and compactly supported), which can be viewed as a Fourier-analytic link between the coarse-scale distribution of the integers and the fine-scale distribution of the integers. Indeed, there is a quick heuristic proof of the functional equation that comes from formally applying the Poisson summation formula to the function , and noting that the functions and are formally Fourier transforms of each other, up to some Gamma function factors, as well as some trigonometric factors arising from the distinction between the real line and the half-line. Such a heuristic proof can indeed be made rigorous, and we do so below the fold, while also providing Riemann’s two classical proofs of the functional equation.
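The Poisson summation heuristic can be checked concretely on the Gaussian: for the theta function θ(t) = Σ_{n∈Z} e^{−πn²t}, Poisson summation gives the modularity relation θ(1/t) = √t · θ(t), which is exactly the identity underlying Riemann's theta-function proof of the functional equation. A minimal sketch:

```python
import math

def theta(t, N=100):
    # θ(t) = Σ_{n=−N}^{N} e^{−π n² t}; the tail is utterly negligible
    return sum(math.exp(-math.pi * n * n * t) for n in range(-N, N + 1))

t = 2.0
lhs, rhs = theta(1 / t), math.sqrt(t) * theta(t)
print(lhs, rhs)   # the two sides agree to machine precision
```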

From the functional equation (and the poles of the Gamma function), one can see that has *trivial zeroes* at the negative even integers , in addition to the non-trivial zeroes in the critical strip. More generally, the following table summarises the zeroes and poles of the various special functions appearing in the functional equation, after they have been meromorphically extended to the entire complex plane, and with zeroes classified as “non-trivial” or “trivial” depending on whether they lie in the critical strip or not. (Exponential functions such as or have no zeroes or poles, and will be ignored in this table; the zeroes and poles of rational functions such as are self-evident and will also not be displayed here.)

Function | Non-trivial zeroes | Trivial zeroes | Poles

(The formulae naming the functions in the first column of this table have not survived; the surviving entries record that only the two zeta-type functions have non-trivial zeroes, that the trigonometric factors contribute trivial zeroes at the even integers, the odd integers, or all the integers respectively (and no poles), and that the Gamma-type factors contribute poles but no zeroes.)

Among other things, this table indicates that the Gamma and trigonometric factors in the functional equation are tied to the trivial zeroes and poles of zeta, but have no direct bearing on the distribution of the non-trivial zeroes, which is the most important feature of the zeta function for the purposes of analytic number theory, beyond the fact that they are symmetric about the real axis and critical line. In particular, the Riemann hypothesis is not going to be resolved just from further analysis of the Gamma function!

The zeta function computes the “global” sum , with ranging all the way from to infinity. However, by some Fourier-analytic (or complex-analytic) manipulation, it is possible to use the zeta function to also control more “localised” sums, such as for some and some smooth compactly supported function . It turns out that the functional equation (3) for the zeta function localises to this context, giving an *approximate functional equation* which roughly speaking takes the form

whenever and ; see Theorem 38 below for a precise formulation of this equation. Unsurprisingly, this form of the functional equation is also very closely related to the Poisson summation formula (8), indeed it is essentially a special case of that formula (or more precisely, of the van der Corput -process). This useful identity relates long smoothed sums of to short smoothed sums of (or vice versa), and can thus be used to shorten exponential sums involving terms such as , which is useful when obtaining some of the more advanced estimates on the Riemann zeta function.

We will give two other basic uses of the functional equation. The first is to get a good count (as opposed to merely an upper bound) on the density of zeroes in the critical strip, establishing the Riemann-von Mangoldt formula that the number of zeroes of imaginary part between and is for large . The other is to obtain untruncated versions of the explicit formula from Notes 2, giving a remarkable exact formula for sums involving the von Mangoldt function in terms of zeroes of the Riemann zeta function. These results are not strictly necessary for most of the material in the rest of the course, but certainly help to clarify the nature of the Riemann zeta function and its relation to the primes.

In view of the material in previous notes, it should not be surprising that there are analogues of all of the above theory for Dirichlet -functions . We will restrict attention to primitive characters , since the -function for imprimitive characters merely differs from the -function of the associated primitive factor by a finite Euler product; indeed, if for some principal whose modulus is coprime to that of , then

(cf. equation (45) of Notes 2).

The main new feature is that the Poisson summation formula needs to be “twisted” by a Dirichlet character , and this boils down to the problem of understanding the finite (additive) Fourier transform of a Dirichlet character. This is achieved by the classical theory of Gauss sums, which we review below the fold. There is one new wrinkle; the value of plays a role in the functional equation. More precisely, we have

Theorem 3 (Functional equation for -functions) Let be a primitive character of modulus with . Then extends to an entire function on the complex plane, with

or equivalently

for all , where is equal to in the even case and in the odd case , and

where is the Gauss sum

and , with the convention that the -periodic function is also (by abuse of notation) applied to in the cyclic group .
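A concrete instance of the Gauss sum is worth computing: for the quadratic character χ mod p (the Legendre symbol), the sum τ(χ) = Σₙ χ(n) e(n/p) has absolute value √p, and for p ≡ 1 (mod 4) it in fact equals √p exactly. A minimal sketch with the assumed small prime p = 13:

```python
import cmath, math

p = 13                                   # assumed prime, p ≡ 1 mod 4

def legendre(n, p):
    # Legendre symbol via Euler's criterion n^{(p−1)/2} mod p
    r = pow(n, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# Gauss sum τ(χ) = Σ_{n=1}^{p−1} χ(n) e^{2πin/p}
tau = sum(legendre(n, p) * cmath.exp(2j * math.pi * n / p)
          for n in range(1, p))
print(tau)        # ≈ √13 ≈ 3.6055…, with negligible imaginary part
```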

From this functional equation and (2) we see that, as with the Riemann zeta function, the non-trivial zeroes of (defined as the zeroes within the critical strip) are symmetric around the critical line (and, if is real, are also symmetric around the real axis). In addition, acquires trivial zeroes at the negative even integers and at zero if , and at the negative odd integers if . For imprimitive , we see from (9) that also acquires some additional trivial zeroes on the left edge of the critical strip.

There is also a symmetric version of this equation, analogous to Corollary 2:

Corollary 4 Let be as above, and set

then is entire with .

For further detail on the functional equation and its implications, I recommend the classic text of Titchmarsh or the text of Davenport.

In Notes 1, we approached multiplicative number theory (the study of multiplicative functions and their relatives) via elementary methods, in which attention was primarily focused on obtaining asymptotic control on summatory functions and logarithmic sums . Now we turn to the complex approach to multiplicative number theory, in which the focus is instead on obtaining various types of control on the Dirichlet series , defined (at least for of sufficiently large real part) by the formula

These series also made an appearance in the elementary approach to the subject, but only for real that were larger than . But now we will exploit the freedom to extend the variable to the complex domain; this gives enough freedom (in principle, at least) to recover control of elementary sums such as or from control on the Dirichlet series. Crucially, for many key functions of number-theoretic interest, the Dirichlet series can be analytically (or at least meromorphically) continued to the left of the line . The zeroes and poles of the resulting meromorphic continuations of (and of related functions) then turn out to control the asymptotic behaviour of the elementary sums of ; the more one knows about the former, the more one knows about the latter. In particular, knowledge of where the zeroes of the Riemann zeta function are located can give very precise information about the distribution of the primes, by means of a fundamental relationship known as the explicit formula. There are many ways of phrasing this explicit formula (both in exact and in approximate forms), but they are all trying to formalise an approximation to the von Mangoldt function (and hence to the primes) of the form

where the sum is over zeroes (counting multiplicity) of the Riemann zeta function (with the sum often restricted so that has large real part and bounded imaginary part), and the approximation is in a suitable weak sense, so that

for suitable “test functions” (which in practice are restricted to be fairly smooth and slowly varying, with the precise amount of restriction dependent on the amount of truncation in the sum over zeroes one wishes to take). Among other things, such approximations can be used to rigorously establish the prime number theorem

as , with the size of the error term closely tied to the location of the zeroes of the Riemann zeta function.

The explicit formula (1) (or any of its more rigorous forms) is closely tied to the counterpart approximation

for the Dirichlet series of the von Mangoldt function; note that (4) is formally the special case of (2) when . Such approximations come from the general theory of local factorisations of meromorphic functions, as discussed in Supplement 2; the passage from (4) to (2) is accomplished by such tools as the residue theorem and the Fourier inversion formula, which were also covered in Supplement 2. The relative ease of uncovering the Fourier-like duality between primes and zeroes (sometimes referred to poetically as the “music of the primes”) is one of the major advantages of the complex-analytic approach to multiplicative number theory; this important duality tends to be rather obscured in the other approaches to the subject, although it can still in principle be discernible with sufficient effort.

More generally, one has an explicit formula

for any (non-principal) Dirichlet character , where now ranges over the zeroes of the associated Dirichlet -function ; we view this formula as a “twist” of (1) by the Dirichlet character . The explicit formula (5), proven similarly (in any of its rigorous forms) to (1), is important in establishing the prime number theorem in arithmetic progressions, which asserts that

as , whenever is a fixed primitive residue class. Again, the size of the error term here is closely tied to the location of the zeroes of the Dirichlet -function, with particular importance given to whether there is a zero very close to (such a zero is known as an *exceptional zero* or Siegel zero).

While any information on the behaviour of zeta functions or -functions is in principle welcome for the purposes of analytic number theory, some regions of the complex plane are more important than others in this regard, due to the differing weights assigned to each zero in the explicit formula. Roughly speaking, in descending order of importance, the most crucial regions on which knowledge of these functions is useful are

- The region on or near the point .
- The region on or near the right edge of the *critical strip*.
- The right half of the critical strip.
- The region on or near the *critical line* that bisects the critical strip.
- Everywhere else.

For instance:

- We will shortly show that the Riemann zeta function has a simple pole at with residue , which is already sufficient to recover much of the classical theorems of Mertens discussed in the previous set of notes, as well as results on mean values of multiplicative functions such as the divisor function . For Dirichlet -functions, the behaviour is instead controlled by the quantity discussed in Notes 1, which is in turn closely tied to the existence and location of a Siegel zero.
- The zeta function is also known to have no zeroes on the right edge of the critical strip, which is sufficient to prove (and is in fact equivalent to) the prime number theorem. Any enlargement of the zero-free region for into the critical strip leads to improved error terms in that theorem, with larger zero-free regions leading to stronger error estimates. Similarly for -functions and the prime number theorem in arithmetic progressions.
- The (as yet unproven) Riemann hypothesis prohibits from having any zeroes within the right half of the critical strip, and gives very good control on the number of primes in intervals, even when the intervals are relatively short compared to the size of the entries. Even without assuming the Riemann hypothesis, *zero density estimates* in this region are available that give some partial control of this form. Similarly for -functions, primes in short arithmetic progressions, and the generalised Riemann hypothesis.
- Assuming the Riemann hypothesis, further distributional information about the zeroes on the critical line (such as Montgomery’s pair correlation conjecture, or the more general *GUE hypothesis*) can give finer information about the error terms in the prime number theorem in short intervals, as well as other arithmetic information. Again, one has analogues for -functions and primes in short arithmetic progressions.
- The functional equation of the zeta function describes the behaviour of to the left of the critical line, in terms of the behaviour to the right of the critical line. This is useful for building a “global” picture of the structure of the zeta function, and for improving a number of estimates about that function, but (in the absence of unproven conjectures such as the Riemann hypothesis or the pair correlation conjecture) it turns out that many of the basic analytic number theory results using the zeta function can be established without relying on this equation. Similarly for -functions.

Remark 1 If one takes an “adelic” viewpoint, one can unite the Riemann zeta function and all of the -functions for various Dirichlet characters into a single object, viewing as a general multiplicative character on the adeles; thus the imaginary coordinate and the Dirichlet character are really the Archimedean and non-Archimedean components respectively of a single adelic frequency parameter. This viewpoint was famously developed in Tate’s thesis, which among other things helps to clarify the nature of the functional equation, as discussed in this previous post. We will not pursue the adelic viewpoint further in these notes, but it does supply a “high-level” explanation for why so much of the theory of the Riemann zeta function extends to the Dirichlet -functions. (The non-Archimedean character and the Archimedean character behave similarly from an algebraic point of view, but not so much from an analytic point of view; as such, the adelic viewpoint is well suited for algebraic tasks (such as establishing the functional equation), but not for analytic tasks (such as establishing a zero-free region).)

Roughly speaking, the elementary multiplicative number theory from Notes 1 corresponds to the information one can extract from the complex-analytic method in region 1 of the above hierarchy, while the more advanced elementary number theory used to prove the prime number theorem (and which we will not cover in full detail in these notes) corresponds to what one can extract from regions 1 and 2.

As a consequence of this hierarchy of importance, information about the function away from the critical strip, such as Euler’s identity

or equivalently

or the infamous identity

which is often presented (slightly misleadingly, if one’s conventions for divergent summation are not made explicit) as

are of relatively little direct importance in analytic prime number theory, although they are still of interest for some other, non-number-theoretic, applications. (The quantity does play a minor role as a normalising factor in some asymptotics, see e.g. Exercise 28 from Notes 1, but its precise value is usually not of major importance.) In contrast, the value of an -function at turns out to be extremely important in analytic number theory, with many results in this subject relying ultimately on a non-trivial lower-bound on this quantity coming from Siegel’s theorem, discussed below the fold.

For a more in-depth treatment of the topics in this set of notes, see Davenport’s “Multiplicative number theory“.

We will shortly turn to the complex-analytic approach to multiplicative number theory, which relies on the basic properties of complex analytic functions. In this supplement to the main notes, we quickly review the portions of complex analysis that we will be using in this course. We will not attempt a comprehensive review of this subject; for instance, we will completely neglect the conformal geometry or Riemann surface aspect of complex analysis, and we will also avoid using the various boundary convergence theorems for Taylor series or Dirichlet series (the latter type of result is traditionally utilised in multiplicative number theory, but I personally find them a little unintuitive to use, and will instead rely on a slightly different set of complex-analytic tools). We will also focus on the “local” structure of complex analytic functions, in particular adopting the philosophy that such functions behave locally like complex polynomials; the classical “global” theory of entire functions, while traditionally used in the theory of the Riemann zeta function, will be downplayed in these notes. On the other hand, we will play up the relationship between complex analysis and Fourier analysis, as we will incline to using the latter tool over the former in some of the subsequent material. (In the traditional approach to the subject, the Mellin transform is used in place of the Fourier transform, but we will not emphasise the role of the Mellin transform here.)

We begin by recalling the notion of a holomorphic function, which will later be shown to be essentially synonymous with that of a complex analytic function.

Definition 1 (Holomorphic function) Let be an open subset of , and let be a function. If , we say that is *complex differentiable* at if the limit

exists, in which case we refer to as the (complex) *derivative* of at . If is differentiable at every point of , and the derivative is continuous, we say that is *holomorphic* on .

Exercise 2 Show that a function is holomorphic if and only if the two-variable function is continuously differentiable on and obeys the Cauchy-Riemann equation

Basic examples of holomorphic functions include complex polynomials

as well as the complex exponential function

which are holomorphic on the entire complex plane (i.e., they are entire functions). The sum or product of two holomorphic functions is again holomorphic; the quotient of two holomorphic functions is holomorphic so long as the denominator is non-zero. Finally, the composition of two holomorphic functions is holomorphic wherever the composition is defined.

Exercise 3

- (i) Establish Euler’s formula

$$e^{iz} = \cos z + i \sin z$$

for all $z \in {\bf C}$. (*Hint*: it is a bit tricky to do this starting from the trigonometric definitions of sine and cosine; I recommend either using the Taylor series formulations of these functions instead, or alternatively relying on the ordinary differential equations obeyed by sine and cosine.)
- (ii) Show that every non-zero complex number $z$ has a complex logarithm $L(z)$ such that $e^{L(z)} = z$, and that this logarithm is unique up to integer multiples of $2\pi i$.
- (iii) Show that there exists a unique principal branch $\mathrm{Log}(z)$ of the complex logarithm in the region $\{ z \in {\bf C}: z \not\in (-\infty,0] \}$, defined by requiring $\mathrm{Log}(z)$ to be a logarithm of $z$ with imaginary part between $-\pi$ and $\pi$. Show that this principal branch is holomorphic with derivative $1/z$.
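These properties of the complex logarithm can be spot-checked numerically; the following Python sketch (with an arbitrary test point) uses the standard library `cmath.log`, which implements exactly the principal branch described above:

```python
import cmath
import math

# cmath.log implements the principal branch: imaginary part in (-pi, pi].
z = -1 + 2j
w = cmath.log(z)
assert abs(cmath.exp(w) - z) < 1e-12          # w really is a logarithm of z
assert -math.pi < w.imag <= math.pi           # principal branch condition

# Any other logarithm of z differs by an integer multiple of 2*pi*i:
w2 = w + 2j * math.pi * 3
assert abs(cmath.exp(w2) - z) < 1e-10

# The derivative of the principal branch is 1/z (finite-difference check):
h = 1e-7
deriv = (cmath.log(z + h) - cmath.log(z)) / h
print(abs(deriv - 1 / z))  # small
```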

In real analysis, we have the fundamental theorem of calculus, which asserts that

$$\int_a^b F'(t)\ dt = F(b) - F(a)$$

whenever $[a,b]$ is a real interval and $F: [a,b] \rightarrow {\bf R}$ is a continuously differentiable function. The complex analogue of this fact is that

$$\int_\gamma f'(z)\ dz = f(z_1) - f(z_0) \qquad (2)$$

whenever $f: U \rightarrow {\bf C}$ is a holomorphic function, and $\gamma: [a,b] \rightarrow U$ is a contour in $U$ from $z_0$ to $z_1$, by which we mean a piecewise continuously differentiable function, and the contour integral $\int_\gamma g(z)\ dz$ for a continuous function $g$ is defined via change of variables as

$$\int_\gamma g(z)\ dz := \int_a^b g(\gamma(t)) \gamma'(t)\ dt.$$

The complex fundamental theorem of calculus (2) follows easily from the real fundamental theorem and the chain rule.
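The identity (2) can be verified numerically; in this Python sketch (the contour, function, and discretisation are arbitrary choices of mine) the contour integral of the derivative of the exponential function along a quarter circle matches the difference of endpoint values:

```python
import cmath

def contour_integral(g, gamma, a, b, n=20000):
    """Numerically integrate g along the contour gamma: [a,b] -> C, using
    the definition int_gamma g = int_a^b g(gamma(t)) gamma'(t) dt with a
    midpoint rule and a finite-difference approximation to gamma'."""
    dt = (b - a) / n
    total = 0
    for k in range(n):
        t = a + (k + 0.5) * dt
        dgamma = (gamma(t + dt / 2) - gamma(t - dt / 2)) / dt
        total += g(gamma(t)) * dgamma * dt
    return total

# Take f = exp (so f' = exp) along a quarter circle from 1 to i.
gamma = lambda t: cmath.exp(1j * t)
a, b = 0.0, cmath.pi / 2
lhs = contour_integral(cmath.exp, gamma, a, b)   # integral of f' along gamma
rhs = cmath.exp(gamma(b)) - cmath.exp(gamma(a))  # f(endpoint) - f(start)
print(abs(lhs - rhs))  # small
```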

In real analysis, we have the rather trivial fact that the integral of a continuous function on a closed contour is always zero:

$$\int_a^a f(t)\ dt = 0.$$

In complex analysis, the analogous fact is significantly more powerful, and is known as Cauchy’s theorem:

Theorem 4 (Cauchy’s theorem) Let $f: U \rightarrow {\bf C}$ be a holomorphic function in a simply connected open set $U$, and let $\gamma: [a,b] \rightarrow U$ be a closed contour in $U$ (thus $\gamma(b) = \gamma(a)$). Then $\int_\gamma f(z)\ dz = 0$.

Exercise 5 Use Stokes’ theorem to give a proof of Cauchy’s theorem.

A useful reformulation of Cauchy’s theorem is that of contour shifting: if is a holomorphic function on an open set , and are two contours in an open set with and , such that can be continuously deformed into , then . A basic application of contour shifting is the Cauchy integral formula:

Theorem 6 (Cauchy integral formula) Let $f: U \rightarrow {\bf C}$ be a holomorphic function in a simply connected open set $U$, and let $\gamma$ be a closed contour which is simple (thus $\gamma$ does not traverse any point more than once, with the exception of the endpoint that is traversed twice), and which encloses a bounded region $\Omega$ in the anticlockwise direction. Then for any $z_0 \in \Omega$, one has

$$\int_\gamma \frac{f(z)}{z - z_0}\ dz = 2\pi i f(z_0).$$

*Proof:* Let be a sufficiently small quantity. By contour shifting, one can replace the contour by the sum (concatenation) of three contours: a contour from to , a contour traversing the circle once anticlockwise, and the reversal of the contour that goes from to . The contributions of the contours cancel each other, thus

By a change of variables, the right-hand side can be expanded as

Sending , we obtain the claim.

The Cauchy integral formula has many consequences. Specialising to the case when $\gamma$ traverses a circle $\{ z: |z-z_0| = r \}$ around $z_0$, we conclude the mean value property

$$f(z_0) = \frac{1}{2\pi} \int_0^{2\pi} f(z_0 + re^{i\theta})\ d\theta$$

whenever $f$ is holomorphic in a neighbourhood of the disk $\{ z: |z - z_0| \leq r \}$. In a similar spirit, we have the maximum principle for holomorphic functions:
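The mean value property is easy to test numerically; in this Python sketch (with an arbitrary centre and radius) the average of the exponential function over a circle agrees with its value at the centre to near machine precision:

```python
import cmath
import math

def circle_average(f, z0, r, n=4096):
    """Average of f over the circle |z - z0| = r.  Equally spaced samples
    (the trapezoid rule for a periodic integrand) converge very quickly
    for smooth functions."""
    return sum(f(z0 + r * cmath.exp(2j * math.pi * k / n))
               for k in range(n)) / n

z0, r = 0.4 - 0.2j, 1.5
avg = circle_average(cmath.exp, z0, r)
print(abs(avg - cmath.exp(z0)))  # essentially zero
```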

Lemma 7 (Maximum principle) Let $U$ be a simply connected open set, and let $\gamma$ be a simple closed contour in $U$ enclosing a bounded region $\Omega$ anti-clockwise. Let $f: U \rightarrow {\bf C}$ be a holomorphic function. If we have the bound $|f(z)| \leq M$ for all $z$ on the contour $\gamma$, then we also have the bound $|f(z_0)| \leq M$ for all $z_0 \in \Omega$.

*Proof:* We use an argument of Landau. Fix $z_0 \in \Omega$. From the Cauchy integral formula and the triangle inequality we have the bound

$$|f(z_0)| \leq C M$$

for some constant $C > 0$ depending on $z_0$ and $\gamma$. This ostensibly looks like a weaker bound than what we want, but we can miraculously make the constant $C$ disappear by the “tensor power trick“. Namely, observe that if $f$ is a holomorphic function bounded in magnitude by $M$ on $\gamma$, and $n$ is a natural number, then $f^n$ is a holomorphic function bounded in magnitude by $M^n$ on $\gamma$. Applying the preceding argument with $f$ replaced by $f^n$ we conclude that

$$|f(z_0)|^n \leq C M^n$$

and hence

$$|f(z_0)| \leq C^{1/n} M.$$

Sending $n \rightarrow \infty$, we obtain the claim.

Another basic application of the integral formula is

Corollary 8 Every holomorphic function is complex analytic, thus it has a convergent Taylor series around every point in the domain. In particular, holomorphic functions are smooth, and the derivative of a holomorphic function is again holomorphic.

Conversely, it is easy to see that complex analytic functions are holomorphic. Thus, the terms “complex analytic” and “holomorphic” are synonymous, at least when working on open domains. (On a non-open set , saying that is analytic on is equivalent to asserting that extends to a holomorphic function of an open neighbourhood of .) This is in marked contrast to real analysis, in which a function can be continuously differentiable, or even smooth, without being real analytic.

*Proof:* By translation, we may suppose that $z_0 = 0$. Let $\gamma$ be a contour traversing the circle $\{ z: |z| = r \}$ that is contained in the domain $U$; then by the Cauchy integral formula one has

$$f(z) = \frac{1}{2\pi i} \int_\gamma \frac{f(w)}{w-z}\ dw$$

for all $z$ in the disk $\{ z: |z| < r \}$. As $f$ is continuously differentiable (and hence continuous) on $\gamma$, it is bounded. From the geometric series formula

$$\frac{1}{w-z} = \frac{1}{w} + \frac{z}{w^2} + \frac{z^2}{w^3} + \dots$$

and dominated convergence, we conclude that

$$f(z) = \sum_{n=0}^\infty \left( \frac{1}{2\pi i} \int_\gamma \frac{f(w)}{w^{n+1}}\ dw \right) z^n$$

with the right-hand side an absolutely convergent series for $|z| < r$, and the claim follows.

Exercise 9 Establish the generalised Cauchy integral formulae

$$f^{(k)}(z_0) = \frac{k!}{2\pi i} \int_\gamma \frac{f(z)}{(z-z_0)^{k+1}}\ dz$$

for any non-negative integer $k$, where $f^{(k)}$ is the $k$-fold complex derivative of $f$.
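One can sanity-check this formula, in the equivalent form of the Taylor coefficient formula from the proof of Corollary 8, with a short Python sketch (the radius and sample count are arbitrary choices of mine); for the exponential function the recovered coefficients should be $1/n!$:

```python
import cmath
import math

def taylor_coefficient(f, n, r=1.0, samples=4096):
    """n-th Taylor coefficient of f at 0, computed from the Cauchy integral
    formula c_n = (1/(2 pi i)) * integral of f(z)/z^{n+1} over |z| = r,
    discretising the circle with equally spaced points."""
    total = 0
    for k in range(samples):
        z = r * cmath.exp(2j * math.pi * k / samples)
        dz = 2j * math.pi * z / samples          # dz = i z dtheta
        total += f(z) / z ** (n + 1) * dz
    return total / (2j * math.pi)

# For exp, the n-th Taylor coefficient is 1/n!.
errors = [abs(taylor_coefficient(cmath.exp, n) - 1 / math.factorial(n))
          for n in range(6)]
print(max(errors))  # essentially zero
```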

This in turn leads to a converse to Cauchy’s theorem, known as Morera’s theorem:

Corollary 10 (Morera’s theorem) Let $f: U \rightarrow {\bf C}$ be a continuous function on an open set $U$ with the property that $\int_\gamma f(z)\ dz = 0$ for all closed contours $\gamma$ in $U$. Then $f$ is holomorphic.

*Proof:* We can of course assume to be non-empty and connected (hence path-connected). Fix a point , and define a “primitive” of by defining , with being any contour from to (this is well defined by hypothesis). By mimicking the proof of the real fundamental theorem of calculus, we see that is holomorphic with , and the claim now follows from Corollary 8.

An important consequence of Morera’s theorem for us is

Corollary 11 (Locally uniform limit of holomorphic functions is holomorphic) Let $f_n$ be holomorphic functions on an open set $U$ which converge locally uniformly to a function $f$. Then $f$ is also holomorphic on $U$.

*Proof:* By working locally we may assume that is a ball, and in particular simply connected. By Cauchy’s theorem, for all closed contours in . By local uniform convergence, this implies that for all such contours, and the claim then follows from Morera’s theorem.

Now we study the zeroes of complex analytic functions. If a complex analytic function $f$ vanishes at a point $z_0$, but is not identically zero in a neighbourhood of that point, then by Taylor expansion we see that $f$ factors in a sufficiently small neighbourhood of $z_0$ as

$$f(z) = (z-z_0)^m g(z) \qquad (4)$$

for some natural number $m$ (which we call the *order* or *multiplicity* of the zero at $z_0$) and some function $g$ that is complex analytic and non-zero near $z_0$; this generalises the factor theorem for polynomials. In particular, the zero $z_0$ is isolated if $f$ does not vanish identically near $z_0$. We conclude that if $U$ is connected and $f$ vanishes on a neighbourhood of some point in $U$, then it must vanish on all of $U$ (since the maximal connected neighbourhood of that point in $U$ on which $f$ vanishes cannot have any boundary point in $U$). This implies unique continuation of analytic functions: if two complex analytic functions on $U$ agree on a non-empty open set, then they agree everywhere. In particular, if a complex analytic function does not vanish everywhere, then all of its zeroes are isolated, so in particular it has only finitely many zeroes on any given compact set.

Recall that a rational function is a function which is a quotient of two polynomials (at least outside of the set where vanishes). Analogously, let us define a meromorphic function on an open set to be a function defined outside of a discrete subset of (the *singularities* of ), which is locally the quotient of holomorphic functions, in the sense that for every , one has in a neighbourhood of excluding , with holomorphic near and with non-vanishing outside of . If and has a zero of equal or higher order than at , then the singularity is removable and one can extend the meromorphic function holomorphically across (by the holomorphic factor theorem (4)); otherwise, the singularity is non-removable and is known as a *pole*, whose order is equal to the difference between the order of and the order of at . (If one wished, one could extend meromorphic functions to the poles by embedding in the Riemann sphere and mapping each pole to , but we will not do so here. One could also consider non-meromorphic functions with essential singularities at various points, but we will have no need to analyse such singularities in this course.) If the order of a pole or zero is one, we say that it is *simple*; if it is two, we say it is *double*; and so forth.

Exercise 12 Show that the space of meromorphic functions on a non-empty open set , quotiented by almost everywhere equivalence, forms a field.

By quotienting two Taylor series, we see that if a meromorphic function $f$ has a pole of order $m$ at some point $z_0$, then it has a Laurent expansion

$$f(z) = \frac{a_{-m}}{(z-z_0)^m} + \dots + \frac{a_{-1}}{z-z_0} + a_0 + a_1 (z-z_0) + \dots,$$

absolutely convergent in a neighbourhood of $z_0$ excluding $z_0$ itself, and with $a_{-m}$ non-zero. The Laurent coefficient $a_{-1}$ has a special significance, and is called the *residue* of the meromorphic function $f$ at $z_0$, which we will denote as $\mathrm{Res}(f; z_0)$. The importance of this coefficient comes from the following significant generalisation of the Cauchy integral formula, known as the residue theorem:

Exercise 13 (Residue theorem) Let $f$ be a meromorphic function on a simply connected domain $U$, and let $\gamma$ be a closed contour in $U$ enclosing a bounded region $\Omega$ anticlockwise, and avoiding all the singularities of $f$. Show that

$$\int_\gamma f(z)\ dz = 2 \pi i \sum_\rho \mathrm{Res}(f; \rho)$$

where $\rho$ is summed over all the poles of $f$ that lie in $\Omega$.
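Here is a Python sketch of the residue theorem in action for a function with two simple poles (the poles, residues, and contour are arbitrary choices of mine, not from the text):

```python
import cmath
import math

def contour_integral_circle(f, z0, r, n=8192):
    """Integral of f over the circle |z - z0| = r, traversed anticlockwise,
    by equally spaced sampling (spectrally accurate off the singularities)."""
    total = 0
    for k in range(n):
        z = z0 + r * cmath.exp(2j * math.pi * k / n)
        total += f(z) * (2j * math.pi * (z - z0) / n)   # dz = i (z - z0) dtheta
    return total

# f has simple poles at 1 (residue 2) and at -1 (residue -1).
f = lambda z: 2 / (z - 1) - 1 / (z + 1)
I = contour_integral_circle(f, 0, 3)        # radius-3 circle encloses both poles
expected = 2j * math.pi * (2 + (-1))        # 2*pi*i times the sum of residues
print(abs(I - expected))  # small
```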

The residue theorem is particularly useful when applied to logarithmic derivatives of meromorphic functions , because the residue is of a specific form:

Exercise 14 Let $f$ be a meromorphic function on an open set $U$ that does not vanish identically. Show that the only poles of $f'/f$ are simple poles (poles of order $1$), occurring at the poles and zeroes of $f$ (after all removable singularities have been removed). Furthermore, the residue of $f'/f$ at a pole $z_0$ is an integer, equal to the order of zero of $f$ if $f$ has a zero at $z_0$, or equal to negative the order of pole of $f$ at $z_0$ if $f$ has a pole at $z_0$.
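The integrality of these residues is visible numerically. In the following Python sketch (the test function and contour are arbitrary choices of mine), the contour integral of the logarithmic derivative around a circle returns, up to tiny numerical error, the integer count of zeroes enclosed:

```python
import cmath
import math

def log_derivative_count(f, z0, r, n=8192):
    """(1/(2 pi i)) times the integral of f'(z)/f(z) over |z - z0| = r,
    with f' approximated by central finite differences.  By the residue
    theorem this counts zeroes minus poles of f inside the circle."""
    h = 1e-6
    total = 0
    for k in range(n):
        z = z0 + r * cmath.exp(2j * math.pi * k / n)
        fprime = (f(z + h) - f(z - h)) / (2 * h)
        total += fprime / f(z) * (2j * math.pi * (z - z0) / n)
    return total / (2j * math.pi)

# Triple zero at 0.5 inside the unit circle; the zero at -2 lies outside.
f = lambda z: (z - 0.5) ** 3 * (z + 2)
count = log_derivative_count(f, 0, 1.0)
print(count)  # approximately 3
```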

Remark 15 The fact that residues of logarithmic derivatives of meromorphic functions are automatically integers is a remarkable feature of the complex analytic approach to multiplicative number theory, which is difficult (though not entirely impossible) to duplicate in other approaches to the subject. Here is a sample application of this integrality, which is challenging to reproduce by non-complex-analytic means: if is meromorphic near , and one has the bound as , then must in fact stay bounded near , because the only integer of magnitude less than $1$ is zero.

Given a function $f: X \rightarrow Y$ between two sets $X, Y$, we can form the graph

$$\Sigma := \{ (x, f(x)): x \in X \},$$

which is a subset of the Cartesian product $X \times Y$.

There are a number of “closed graph theorems” in mathematics which relate the regularity properties of the function with the closure properties of the graph , assuming some “completeness” properties of the domain and range . The most famous of these is the closed graph theorem from functional analysis, which I phrase as follows:

Theorem 1 (Closed graph theorem (functional analysis)) Let $X, Y$ be complete normed vector spaces over the reals (i.e. Banach spaces). Then a function $T: X \rightarrow Y$ is a continuous linear transformation if and only if the graph $\Sigma := \{ (x, Tx): x \in X \}$ is both linearly closed (i.e. it is a linear subspace of $X \times Y$) and topologically closed (i.e. closed in the product topology of $X \times Y$).

I like to think of this theorem as linking together qualitative and quantitative notions of regularity preservation properties of an operator ; see this blog post for further discussion.

The theorem is equivalent to the assertion that any continuous linear bijection from one Banach space to another is necessarily an isomorphism in the sense that the inverse map is also continuous and linear. Indeed, to see that this claim implies the closed graph theorem, one applies it to the projection from to , which is a continuous linear bijection; conversely, to deduce this claim from the closed graph theorem, observe that the graph of the inverse is the reflection of the graph of . As such, the closed graph theorem is a corollary of the open mapping theorem, which asserts that any continuous linear *surjection* from one Banach space to another is open. (Conversely, one can deduce the open mapping theorem from the closed graph theorem by quotienting out the kernel of the continuous surjection to get a bijection.)

It turns out that there is a closed graph theorem (or equivalent reformulations of that theorem, such as an assertion that bijective morphisms between sufficiently “complete” objects are necessarily isomorphisms, or as an open mapping theorem) in many other categories in mathematics as well. Here are some easy ones:

Theorem 2 (Closed graph theorem (linear algebra)) Let $X, Y$ be vector spaces over a field $k$. Then a function $T: X \rightarrow Y$ is a linear transformation if and only if the graph $\Sigma := \{ (x, Tx): x \in X \}$ is linearly closed.

Theorem 3 (Closed graph theorem (group theory)) Let $X, Y$ be groups. Then a function $\phi: X \rightarrow Y$ is a group homomorphism if and only if the graph $\Sigma := \{ (x, \phi(x)): x \in X \}$ is closed under the group operations (i.e. it is a subgroup of $X \times Y$).

Theorem 4 (Closed graph theorem (order theory)) Let $X, Y$ be totally ordered sets. Then a function $f: X \rightarrow Y$ is monotone increasing if and only if the graph $\{ (x, f(x)): x \in X \}$ is totally ordered (using the product order on $X \times Y$).

Remark 1 Similar results to the above three theorems (with similarly easy proofs) hold for other algebraic structures, such as rings (using the usual product of rings), modules, algebras, Lie algebras, groupoids, or even categories (a map between categories is a functor iff its graph is again a category). (ADDED IN VIEW OF COMMENTS: further examples include affine spaces and $G$-sets (sets with an action of a given group $G$).) There are also various approximate versions of this theorem that are useful in arithmetic combinatorics, that relate the property of a map being an “approximate homomorphism” in some sense with its graph being an “approximate group” in some sense. This is particularly useful for this subfield of mathematics because there are currently more theorems about approximate groups than about approximate homomorphisms, so that one can profitably use closed graph theorems to transfer results about the former to results about the latter.

A slightly more sophisticated result in the same vein:

Theorem 5 (Closed graph theorem (point set topology)) Let $X, Y$ be compact Hausdorff spaces. Then a function $f: X \rightarrow Y$ is continuous if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is topologically closed.

Indeed, the “only if” direction is easy, while for the “if” direction, note that if is a closed subset of , then it is compact Hausdorff, and the projection map from to is then a bijective continuous map between compact Hausdorff spaces, which is then closed, thus open, and hence a homeomorphism, giving the claim.

Note that the compactness hypothesis is necessary: for instance, the function $f: {\bf R} \rightarrow {\bf R}$ defined by $f(x) := 1/x$ for $x \neq 0$ and $f(0) := 0$ is a function which has a closed graph, but is discontinuous.

A similar result (but relying on a much deeper theorem) is available in algebraic geometry, as I learned after asking this MathOverflow question:

Theorem 6 (Closed graph theorem (algebraic geometry)) Let $X, Y$ be normal projective varieties over an algebraically closed field $k$ of characteristic zero. Then a function $f: X \rightarrow Y$ is a regular map if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is Zariski-closed.

*Proof:* (Sketch) For the only if direction, note that the map is a regular map from the projective variety to the projective variety and is thus a projective morphism, hence is proper. In particular, the image of under this map is Zariski-closed.

Conversely, if is Zariski-closed, then it is also a projective variety, and the projection is a projective morphism from to , which is clearly quasi-finite; by the characteristic zero hypothesis, it is also separated. Applying (Grothendieck’s form of) Zariski’s main theorem, this projection is the composition of an open immersion and a finite map. As projective varieties are complete, the open immersion is an isomorphism, and so the projection from to is finite. Being injective and separable, the degree of this finite map must be one, and hence and are isomorphic, hence (by normality of ) is contained in (the image of) , which makes the map from to regular, which makes regular.

The counterexample of the map given by for and demonstrates why the projective hypothesis is necessary. The necessity of the normality condition (or more precisely, a weak normality condition) is demonstrated by (the projective version of) the map from the cuspidal curve to . (If one restricts attention to smooth varieties, though, normality becomes automatic.) The necessity of characteristic zero is demonstrated by (the projective version of) the inverse of the Frobenius map on a field of characteristic .

There are also a number of closed graph theorems for topological groups, of which the following is typical (see Exercise 3 of these previous blog notes):

Theorem 7 (Closed graph theorem (topological group theory)) Let $X, Y$ be $\sigma$-compact, locally compact Hausdorff groups. Then a function $\phi: X \rightarrow Y$ is a continuous homomorphism if and only if the graph $\Sigma := \{ (x, \phi(x)): x \in X \}$ is both group-theoretically closed and topologically closed.

The hypotheses of being $\sigma$-compact, locally compact, and Hausdorff can be relaxed somewhat, but I doubt that they can be eliminated entirely (though I do not have a ready counterexample for this).

In several complex variables, it is a classical theorem (see e.g. Lemma 4 of this blog post) that a holomorphic function from a domain in to is locally injective if and only if it is a local diffeomorphism (i.e. its derivative is everywhere non-singular). This leads to a closed graph theorem for complex manifolds:

Theorem 8 (Closed graph theorem (complex manifolds)) Let $X, Y$ be complex manifolds. Then a function $f: X \rightarrow Y$ is holomorphic if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is a complex manifold (using the complex structure inherited from $X \times Y$) of the same dimension as $X$.

Indeed, one applies the previous observation to the projection from to . The dimension requirement is needed, as can be seen from the example of the map defined by for and .

(ADDED LATER:) There is a real analogue to the above theorem:

Theorem 9 (Closed graph theorem (real manifolds)) Let $X, Y$ be real manifolds. Then a function $f: X \rightarrow Y$ is continuous if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is a real manifold of the same dimension as $X$.

This theorem can be proven by applying invariance of domain (discussed in this previous post) to the projection of to , to show that it is open if has the same dimension as .

Note though that the analogous claim for *smooth* real manifolds fails: the function $f: {\bf R} \rightarrow {\bf R}$ defined by $f(x) := x^{1/3}$ has a smooth graph, but is not itself smooth.

(ADDED YET LATER:) Here is an easy closed graph theorem in the symplectic category:

Theorem 10 (Closed graph theorem (symplectic geometry)) Let $X = (X, \omega_X)$ and $Y = (Y, \omega_Y)$ be smooth symplectic manifolds of the same dimension. Then a smooth map $\phi: X \rightarrow Y$ is a symplectic morphism (i.e. $\phi^* \omega_Y = \omega_X$) if and only if the graph $\Sigma := \{ (x, \phi(x)): x \in X \}$ is a Lagrangian submanifold of $X \times Y$ with the symplectic form $\omega_X \oplus -\omega_Y$.

In view of the symplectic rigidity phenomenon, it is likely that the smoothness hypotheses on can be relaxed substantially, but I will not try to formulate such a result here.

There are presumably many further examples of closed graph theorems (or closely related theorems, such as criteria for inverting a morphism, or open mapping type theorems) throughout mathematics; I would be interested to know of further examples.

One of the most well known problems from ancient Greek mathematics was that of trisecting an angle by straightedge and compass, which was eventually proven impossible in 1837 by Pierre Wantzel, using methods from Galois theory.

Formally, one can set up the problem as follows. Define a *configuration* to be a finite collection of points, lines, and circles in the Euclidean plane. Define a *construction step* to be one of the following operations to enlarge the collection :

- (Straightedge) Given two distinct points in , form the line that connects and , and add it to .
- (Compass) Given two distinct points in , and given a third point in (which may or may not equal or ), form the circle with centre and radius equal to the length of the line segment joining and , and add it to .
- (Intersection) Given two distinct curves in (thus is either a line or a circle in , and similarly for ), select a point that is common to both and (there are at most two such points), and add it to .

We say that a point, line, or circle is *constructible by straightedge and compass* from a configuration if it can be obtained from after applying a finite number of construction steps.

Problem 1 (Angle trisection) Let $A, B, C$ be distinct points in the plane. Is it always possible to construct by straightedge and compass from $A, B, C$ a line $\ell$ through $A$ that *trisects* the angle $\angle BAC$, in the sense that the angle between $\ell$ and $BA$ is one third of the angle of $\angle BAC$?

Thanks to Wantzel’s result, the answer to this problem is known to be “no” in general; a *generic* angle cannot be trisected by straightedge and compass. (On the other hand, some *special* angles can certainly be trisected by straightedge and compass, such as a right angle. Also, one can certainly trisect generic angles using other methods than straightedge and compass; see the Wikipedia page on angle trisection for some examples of this.)

The impossibility of angle trisection stands in sharp contrast to the easy construction of angle *bisection* via straightedge and compass, which we briefly review as follows:

- Start with three points .
- Form the circle with centre and radius , and intersect it with the line . Let be the point in this intersection that lies on the same side of as . ( may well be equal to ).
- Form the circle with centre and radius , and the circle with centre and radius . Let be the point of intersection of and that is not .
- The line will then bisect the angle .
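The bisection recipe above can be carried out in coordinates; here is a Python sketch (identifying the plane with the complex numbers, and using an arbitrary test triangle of my choosing) that follows the construction step by step and confirms that the resulting line bisects the angle:

```python
import cmath
import math

def bisect(A, B, C):
    """Follow the straightedge-and-compass bisection in complex coordinates;
    returns the auxiliary point D, so that the line AD bisects angle BAC."""
    r = abs(B - A)
    Cp = A + r * (C - A) / abs(C - A)   # circle about A of radius |AB| meets ray AC
    # D: intersection of the circles of radius r about B and about C'
    # (the other intersection point is A itself, since |AB| = |AC'| = r).
    M = (B + Cp) / 2                    # midpoint of the chord BC'
    half = abs(B - Cp) / 2
    offset = math.sqrt(r * r - half * half)
    perp = 1j * (Cp - B) / abs(Cp - B)  # unit vector perpendicular to the chord
    D1, D2 = M + offset * perp, M - offset * perp
    return D1 if abs(D1 - A) > abs(D2 - A) else D2   # the intersection that is not A

A, B, C = 0 + 0j, 2 + 0j, 1 + 2j
D = bisect(A, B, C)
half_angle = (cmath.phase(B - A) + cmath.phase(C - A)) / 2
print(abs(cmath.phase(D - A) - half_angle))  # essentially zero: AD bisects angle BAC
```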

The key difference between angle trisection and angle bisection ultimately boils down to the following trivial number-theoretic fact:

Lemma 2 No power of $2$ is divisible by $3$.

*Proof:* Obvious by modular arithmetic, by induction, or by the fundamental theorem of arithmetic.

In contrast, there are of course plenty of powers of $2$ that are evenly divisible by $2$, and this is ultimately why angle bisection is easy while angle trisection is hard.
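The number-theoretic fact in question, that no power of $2$ is divisible by $3$ while every positive power of $2$ is divisible by $2$, is trivial to confirm by machine (a two-line Python check of my own, purely for illustration):

```python
# 2 is congruent to -1 mod 3, so powers of 2 mod 3 alternate between
# 1 and 2 and never hit 0: no power of 2 is divisible by 3.
residues = {pow(2, n, 3) for n in range(1000)}
print(residues)  # {1, 2}

# By contrast, every power of 2 beyond the zeroth is divisible by 2.
assert all(pow(2, n) % 2 == 0 for n in range(1, 50))
```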

The standard way in which Lemma 2 is used to demonstrate the impossibility of angle trisection is via Galois theory. The implication is quite short if one knows this theory, but quite opaque otherwise. We briefly sketch the proof of this implication here, though we will not need it in the rest of the discussion. Firstly, Lemma 2 implies the following fact about field extensions.

Corollary 3Let be a field, and let be an extension of that can be constructed out of by a finite sequence of quadratic extensions. Then does not contain any cubic extensions of .

*Proof:* If contained a cubic extension of , then the dimension of over would be a multiple of three. On the other hand, if is obtained from by a tower of quadratic extensions, then the dimension of over is a power of two. The claim then follows from Lemma 2.

To conclude the proof, one then notes that any point, line, or circle that can be constructed from a configuration is definable in a field obtained from the coefficients of all the objects in after taking a finite number of quadratic extensions, whereas a trisection of an angle will generically only be definable in a cubic extension of the field generated by the coordinates of .

The Galois theory method also allows one to obtain many other impossibility results of this type, most famously the Abel-Ruffini theorem on the insolvability of the quintic equation by radicals. For this reason (and also because of the many applications of Galois theory to number theory and other branches of mathematics), the Galois theory argument is the “right” way to prove the impossibility of angle trisection within the broader framework of modern mathematics. However, this argument has the drawback that it requires one to first understand Galois theory (or at least field theory), which is usually not presented until an advanced undergraduate algebra or number theory course, whilst the angle trisection problem requires only high-school level mathematics to formulate. Even if one is allowed to “cheat” and sweep several technicalities under the rug, one still needs to possess a fair amount of solid intuition about advanced algebra in order to appreciate the proof. (This was undoubtedly one reason why, even after Wantzel’s impossibility result was published, a large amount of effort was still expended by amateur mathematicians to try to trisect a general angle.)

In this post I would therefore like to present a different proof (or perhaps more accurately, a disguised version of the standard proof) of the impossibility of angle trisection by straightedge and compass, that avoids explicit mention of Galois theory (though it is never far beneath the surface). With “cheats”, the proof is actually quite simple and geometric (except for Lemma 2, which is still used at a crucial juncture), based on the basic geometric concept of monodromy; unfortunately, some technical work is needed to remove these cheats.

To describe the intuitive idea of the proof, let us return to the angle bisection construction, that takes a triple of points as input and returns a bisecting line as output. We iterate the construction to create a quadrisecting line , via the following sequence of steps that extend the original bisection construction:

- Start with three points .
- Form the circle with centre and radius , and intersect it with the line . Let be the point in this intersection that lies on the same side of as . ( may well be equal to ).
- Form the circle with centre and radius , and the circle with centre and radius . Let be the point of intersection of and that is not .
- Let be the point on the line which lies on , and is on the same side of as .
- Form the circle with centre and radius . Let be the point of intersection of and that is not .
- The line will then quadrisect the angle .

Let us fix the points and , but not , and view (as well as intermediate objects such as , , , , , , ) as a function of .

Let us now do the following: we begin rotating counterclockwise around , which drags around the other objects , , , , , , that were constructed by accordingly. For instance, here is an early stage of this rotation process, when the angle has become obtuse:

Now for the slightly tricky bit. We are going to keep rotating beyond a half-rotation of , so that now becomes a *reflex angle*. At this point, a singularity occurs; the point collides into , and so there is an instant in which the line is not well-defined. However, this turns out to be a *removable singularity* (and the easiest way to demonstrate this will be to tap the power of complex analysis, as complex numbers can easily route around such a singularity), and we can blast through it to the other side, giving a picture like this:

Note that we have now deviated from the original construction in that and are no longer on the same side of ; we are thus now working in a *continuation* of that construction rather than with the construction itself. Nevertheless, we can still work with this continuation (much as, say, one works with analytic continuations of infinite series such as beyond their original domain of definition).

We now keep rotating around . Here, is approaching a full rotation of :

When reaches a full rotation, a different singularity occurs: and coincide. Nevertheless, this is also a removable singularity, and we blast through to beyond a full rotation:

And now is back where it started, as are , , , and … but the point has moved, from one intersection point of to the other. As a consequence, , , and have also changed, with being at right angles to where it was before. (In the jargon of modern mathematics, the quadrisection construction has a non-trivial monodromy.)

But nothing stops us from rotating some more. If we continue this procedure, we see that after two full rotations of around , all points, lines, and circles constructed from have returned to their original positions. Because of this, we shall say that the quadrisection construction described above is *periodic with period $2$*.

Similarly, if one performs an octisection of the angle by bisecting the quadrisection, one can verify that this octisection is periodic with period $4$; it takes four full rotations of around before the configuration returns to where it started. More generally, one can show

Proposition 4 Any construction by straightedge and compass from the points $A, B, C$ is periodic with period equal to a power of $2$.

The reason for this, ultimately, is because any two circles or lines will intersect each other in at most two points, and so at each step of a straightedge-and-compass construction there is an ambiguity of at most $2$. Each rotation of $B$ around $A$ can potentially flip one of these points to the other, but then if one rotates again, the point returns to its original position, and then one can analyse the next point in the construction in the same fashion until one obtains the proposition.
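The period-doubling behaviour of a single square-root ambiguity can be seen concretely with complex numbers. In this Python sketch (the step count is an arbitrary choice of mine), a branch of the square root is continued along a loop around the origin by always picking whichever of the two square roots is closest to the previous value: one loop flips the branch, two loops restore it, which is the period-$2$ mechanism that compounds into the powers of $2$ in Proposition 4:

```python
import cmath
import math

def continue_sqrt(loops, steps_per_loop=1000):
    """Analytically continue a branch of sqrt(z) along the circle z = e^{it},
    starting at z = 1 with value 1, for the given number of full loops.
    At each step we take whichever of the two square roots is closer to the
    previous value (continuously tracking the branch through the ambiguity)."""
    w = 1.0 + 0.0j
    for k in range(1, loops * steps_per_loop + 1):
        z = cmath.exp(2j * math.pi * k / steps_per_loop)
        root = cmath.sqrt(z)
        w = root if abs(root - w) < abs(-root - w) else -root
    return w

once = continue_sqrt(1)
twice = continue_sqrt(2)
print(once, twice)  # approximately -1 and +1: one loop flips, two loops restore
```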

But now consider a putative trisection operation, that starts with an arbitrary angle and somehow uses some sequence of straightedge and compass constructions to end up with a trisecting line :

What is the period of this construction? If we continuously rotate $B$ around $A$, we observe that a full rotation of $B$ only causes the trisecting line $\ell$ to rotate by a third of a full rotation (i.e. by $120^\circ$):

Because of this, we see that the period of any construction that contains $\ell$ must be a multiple of $3$. But this contradicts Proposition 4 and Lemma 2.

Below the fold, I will make the above proof rigorous. Unfortunately, in doing so, I had to again leave the world of high-school mathematics, as one needs a little bit of algebraic geometry and complex analysis to resolve the issues with singularities that we saw in the above sketch. Still, I feel that at an intuitive level at least, this argument is more geometric and accessible than the Galois-theoretic argument (though anyone familiar with Galois theory will note that there is really not that much difference between the proofs, ultimately, as one has simply replaced the Galois group with a closely related monodromy group instead).

The Riemann zeta function $\zeta(s)$ is defined in the region $\mathrm{Re}(s) > 1$ by the absolutely convergent series

$$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \dots \qquad (1)$$

Thus, for instance, it is known that $\zeta(2) = \pi^2/6$, and thus

$$1 + \frac{1}{4} + \frac{1}{9} + \dots = \frac{\pi^2}{6}. \qquad (2)$$

For $\mathrm{Re}(s) \leq 1$, the series on the right-hand side of (1) is no longer absolutely convergent, or even conditionally convergent. Nevertheless, the $\zeta$ function can be extended to this region (with a pole at $s=1$) by analytic continuation. For instance, it can be shown that after analytic continuation, one has $\zeta(0) = -1/2$, $\zeta(-1) = -1/12$, and $\zeta(-2) = 0$, and more generally

$$\zeta(-s) = -\frac{B_{s+1}}{s+1} \qquad (3)$$

for $s = 1, 2, \dots$, where $B_n$ are the Bernoulli numbers. If one *formally* applies (1) at these values of $s$, one obtains the somewhat bizarre formulae

$$1 + 1 + 1 + \dots = -\frac{1}{2} \qquad (4)$$

$$1 + 2 + 3 + \dots = -\frac{1}{12} \qquad (5)$$

$$1 + 4 + 9 + \dots = 0 \qquad (6)$$

and more generally

$$1 + 2^s + 3^s + \dots = -\frac{B_{s+1}}{s+1}. \qquad (7)$$

Clearly, these formulae do not make sense if one stays within the traditional way to evaluate infinite series, and so it seems that one is forced to use the somewhat unintuitive analytic continuation interpretation of such sums to make these formulae rigorous. But as it stands, the formulae look “wrong” for several reasons. Most obviously, the summands on the left are all positive, but the right-hand sides can be zero or negative. A little more subtly, the identities do not appear to be consistent with each other. For instance, if one adds (4) to (5), one obtains

$$2 + 3 + 4 + \dots = -\frac{7}{12}$$

whereas if one subtracts $1$ from (5) one obtains instead

$$2 + 3 + 4 + \dots = -\frac{13}{12}$$

and the two equations seem inconsistent with each other.

However, it is possible to interpret (4), (5), (6) by purely real-variable methods, without recourse to complex analysis methods such as analytic continuation, thus giving an “elementary” interpretation of these sums that only requires undergraduate calculus; we will later also explain how this interpretation deals with the apparent inconsistencies pointed out above.

To see this, let us first consider a convergent sum such as (2). The classical interpretation of this formula is the assertion that the partial sums

$$\sum_{n=1}^N \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \dots + \frac{1}{N^2}$$

converge to $\frac{\pi^2}{6}$ as $N \rightarrow \infty$, or in other words that

$$\sum_{n=1}^N \frac{1}{n^2} = \frac{\pi^2}{6} + o(1)$$

where $o(1)$ denotes a quantity that goes to zero as $N \rightarrow \infty$. Actually, by using the integral test estimate

$$\sum_{n=N+1}^\infty \frac{1}{n^2} \leq \int_N^\infty \frac{dx}{x^2} = \frac{1}{N}$$

we have the sharper result

$$\sum_{n=1}^N \frac{1}{n^2} = \frac{\pi^2}{6} + O\left(\frac{1}{N}\right).$$

Thus we can view $\frac{\pi^2}{6}$ as the leading coefficient of the asymptotic expansion of the partial sums of $\sum 1/n^2$.
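The size of the error after $N$ terms is easy to observe numerically; in this short Python sketch (with an arbitrary choice of $N$) the scaled tail approaches $1$, consistent with the integral test estimate:

```python
import math

N = 1000
partial = sum(1 / n ** 2 for n in range(1, N + 1))
tail = math.pi ** 2 / 6 - partial   # the error after N terms
print(tail * N)  # close to 1, matching the 1/N size of the tail
```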

One can then try to inspect the partial sums of the expressions in (4), (5), (6), but the coefficients bear no obvious relationship to the right-hand sides ${-1/2}$, ${-1/12}$, ${0}$:

$\displaystyle \sum_{n=1}^N 1 = N$

$\displaystyle \sum_{n=1}^N n = \frac{1}{2} N^2 + \frac{1}{2} N$

$\displaystyle \sum_{n=1}^N n^2 = \frac{1}{3} N^3 + \frac{1}{2} N^2 + \frac{1}{6} N.$
For (7), the classical Faulhaber formula (or *Bernoulli formula*) gives

$\displaystyle \sum_{n=1}^N n^s = \frac{1}{s+1} \sum_{j=0}^s \binom{s+1}{j} B_j^+ N^{s+1-j}$

for ${s = 1, 2, 3, \ldots}$, where ${B_j^+}$ denotes the Bernoulli numbers in the convention ${B_1^+ = +1/2}$, which has a vague resemblance to (7), but again the connection is not particularly clear.
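To make the Faulhaber formula concrete, here is a small sketch (helper names are my own) that computes Bernoulli numbers from the standard recurrence and checks the formula against brute-force sums with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli_minus(m):
    """Bernoulli numbers B_0..B_m in the B_1 = -1/2 convention,
    via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        B.append(-sum(Fraction(comb(k + 1, j)) * B[j] for j in range(k)) / (k + 1))
    return B

def faulhaber(s, N):
    """sum_{n=1}^N n^s via Bernoulli numbers (flipped to the B_1 = +1/2 convention)."""
    Bp = [(-1) ** j * b for j, b in enumerate(bernoulli_minus(s))]
    return sum(Fraction(comb(s + 1, j)) * Bp[j] * N ** (s + 1 - j)
               for j in range(s + 1)) / (s + 1)

# Check against direct summation for several exponents and cutoffs.
for s in range(7):
    for N in (1, 5, 20):
        assert faulhaber(s, N) == sum(n ** s for n in range(1, N + 1))
```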

The problem here is the discrete nature of the partial sum

$\displaystyle \sum_{n=1}^N n^s,$

which (if ${N}$ is viewed as a real number) has jump discontinuities at each positive integer value of ${N}$. These discontinuities yield various artefacts when trying to approximate this sum by a polynomial in ${N}$. (These artefacts also occur in (2), but happen in that case to be obscured in the error term ${O(1/N)}$; but for the divergent sums (4), (5), (6), (7), they are large enough to cause real trouble.)

However, these issues can be resolved by replacing the abruptly truncated partial sums ${\sum_{n=1}^N f(n)}$ with smoothed sums ${\sum_{n=1}^\infty f(n) \eta(n/N)}$, where ${\eta: [0,+\infty) \rightarrow {\bf R}}$ is a *cutoff function*, or more precisely a compactly supported bounded function that equals ${1}$ at ${0}$. The case when ${\eta}$ is the indicator function ${1_{[0,1]}}$ then corresponds to the traditional partial sums, with all the attendant discretisation artefacts; but if one chooses a smoother cutoff, then these artefacts begin to disappear (or at least become lower order), and the true asymptotic expansion becomes more manifest.

Note that smoothing does not affect the asymptotic value of sums that were already absolutely convergent, thanks to the dominated convergence theorem. For instance, we have

$\displaystyle \sum_{n=1}^\infty \frac{1}{n^2} \eta(n/N) = \frac{\pi^2}{6} + o(1)$

whenever ${\eta}$ is a cutoff function (since ${\eta(n/N) \rightarrow 1}$ pointwise as ${N \rightarrow \infty}$ and is uniformly bounded). If ${\eta}$ is equal to ${1}$ on a neighbourhood of the origin, then the integral test argument recovers the ${O(1/N)}$ decay rate:

$\displaystyle \sum_{n=1}^\infty \frac{1}{n^2} \eta(n/N) = \frac{\pi^2}{6} + O\left(\frac{1}{N}\right).$
However, smoothing can greatly improve the convergence properties of a divergent sum. The simplest example is Grandi’s series

$\displaystyle 1 - 1 + 1 - \ldots = \sum_{n=0}^\infty (-1)^n.$

The partial sums

$\displaystyle S_N := \sum_{n=0}^N (-1)^n = \frac{1 + (-1)^N}{2}$

oscillate between ${1}$ and ${0}$, and so this series is not conditionally convergent (and certainly not absolutely convergent). However, if one performs analytic continuation on the series

$\displaystyle \sum_{n=0}^\infty (-1)^n z^n = \frac{1}{1+z}, \qquad |z| < 1,$

and sets ${z = 1}$, one obtains a formal value of ${1/2}$ for this series. This value can also be obtained by smooth summation. Indeed, for any cutoff function ${\eta}$, we can regroup

$\displaystyle \sum_{n=0}^\infty (-1)^n \eta(n/N) = \frac{\eta(0)}{2} + \frac{1}{2} \sum_{m=0}^\infty \left( \eta(2m/N) - 2\eta((2m+1)/N) + \eta((2m+2)/N) \right).$
If ${\eta}$ is twice continuously differentiable (i.e. ${\eta \in C^2}$), then from Taylor expansion we see that the summand has size ${O(1/N^2)}$, and also (from the compact support of ${\eta}$) is only non-zero when ${m = O(N)}$. This leads to the asymptotic

$\displaystyle \sum_{n=0}^\infty (-1)^n \eta(n/N) = \frac{1}{2} + O\left(\frac{1}{N}\right) \ \ \ \ \ (10)$

and so we recover the value of ${1/2}$ as the leading term of the asymptotic expansion.
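One can watch this happen numerically. The sketch below is my own illustration, using a Gaussian cutoff ${\eta(x) = e^{-x^2}}$ for convenience; a Gaussian is not compactly supported, but its rapid decay plays the same role, and any smooth cutoff with ${\eta(0)=1}$ would do.

```python
import math

# Smoothed Grandi series: sum over n of (-1)^n * eta(n/N), with the
# (rapidly decaying, though not compactly supported) cutoff eta(x) = exp(-x^2).
def smoothed_grandi(N):
    # 20*N terms suffice: beyond that the cutoff is exp(-400), i.e. negligible.
    return sum((-1) ** n * math.exp(-((n / N) ** 2)) for n in range(20 * N))

for N in (10, 50, 100):
    print(N, smoothed_grandi(N))  # approaches the Abel/analytic-continuation value 1/2
assert abs(smoothed_grandi(100) - 0.5) < 1e-6
```

In fact, for this particular cutoff the convergence is far faster than the ${O(1/N)}$ guaranteed by the general theory, as the sum is a Jacobi theta value that approaches ${1/2}$ exponentially quickly.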

Exercise 1 Show that if ${\eta}$ is merely once continuously differentiable (i.e. ${\eta \in C^1}$), then we have a similar asymptotic, but with an error term of ${o(1)}$ instead of ${O(1/N)}$. This is an instance of a more general principle that smoother cutoffs lead to better error terms, though the improvement sometimes stops after some degree of regularity.

Remark 1 The most famous instance of smoothed summation is Cesàro summation, which corresponds to the cutoff function ${\eta(x) := (1-x)_+}$. Unsurprisingly, when Cesàro summation is applied to Grandi’s series, one again recovers the value of ${1/2}$.
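Cesàro summation can equivalently be computed by averaging partial sums, which makes the recovery of ${1/2}$ for Grandi's series transparent (a small illustration of mine):

```python
# Cesàro summation of Grandi's series: average the first M partial sums.
# The partial sums alternate 1, 0, 1, 0, ..., so their averages tend to 1/2.
def cesaro_grandi(M):
    s, total = 0, 0
    for n in range(M):
        s += (-1) ** n      # partial sum of 1 - 1 + 1 - ... after n+1 terms
        total += s
    return total / M

assert abs(cesaro_grandi(1000) - 0.5) < 1e-2
```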

If we now revisit the divergent series (4), (5), (6), (7) with smooth summation in mind, we finally begin to see the origin of the right-hand sides. Indeed, for any fixed smooth cutoff function ${\eta}$, we will shortly show that

$\displaystyle \sum_{n=1}^\infty \eta(n/N) = -\frac{1}{2} + C_{\eta,0} N + O\left(\frac{1}{N}\right) \ \ \ \ \ (11)$

$\displaystyle \sum_{n=1}^\infty n\, \eta(n/N) = -\frac{1}{12} + C_{\eta,1} N^2 + O\left(\frac{1}{N}\right) \ \ \ \ \ (12)$

$\displaystyle \sum_{n=1}^\infty n^2\, \eta(n/N) = 0 + C_{\eta,2} N^3 + O\left(\frac{1}{N}\right) \ \ \ \ \ (13)$

and more generally

$\displaystyle \sum_{n=1}^\infty n^s\, \eta(n/N) = -\frac{B_{s+1}}{s+1} + C_{\eta,s} N^{s+1} + O\left(\frac{1}{N}\right) \ \ \ \ \ (14)$

for any fixed ${s = 0, 1, 2, \ldots}$, where ${C_{\eta,s}}$ is the Archimedean factor

$\displaystyle C_{\eta,s} := \int_0^\infty x^s \eta(x)\, dx \ \ \ \ \ (15)$

(which is also essentially the Mellin transform of ${\eta}$). Thus we see that the values (4), (5), (6), (7) obtained by analytic continuation are nothing more than the constant terms of the asymptotic expansion of the *smoothed* partial sums. This is not a coincidence; we will explain the equivalence of these two interpretations of such sums (in the model case when the analytic continuation has only finitely many poles and does not grow too fast at infinity) below the fold.
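The ${s=1}$ case can be checked numerically. The sketch below is my own illustration with the Gaussian cutoff ${\eta(x) = e^{-x^2}}$ (again, not compactly supported, but rapidly decaying, which suffices); for this cutoff the Archimedean factor is ${C_{\eta,1} = \int_0^\infty x e^{-x^2}\, dx = 1/2}$, so subtracting ${N^2/2}$ should leave approximately ${-1/12}$.

```python
import math

# Smoothed version of 1 + 2 + 3 + ... with cutoff eta(x) = exp(-x^2).
# Prediction:  sum_n n * eta(n/N)  =  C * N^2  - 1/12 + o(1),
# with Archimedean factor C = integral_0^infty x*exp(-x^2) dx = 1/2.
def smoothed_sum(N):
    # Terms beyond 20*N carry a factor exp(-400) and are negligible.
    return sum(n * math.exp(-((n / N) ** 2)) for n in range(1, 20 * N))

for N in (20, 50, 100):
    print(N, smoothed_sum(N) - N**2 / 2)  # tends to -1/12 = -0.08333...
assert abs(smoothed_sum(100) - 100**2 / 2 + 1.0 / 12) < 1e-4
```

Sharply truncated partial sums of ${1+2+3+\ldots}$ grow like ${N^2/2 + N/2}$ with no visible ${-1/12}$; the smoothing suppresses the discretisation artefacts and exposes the constant term.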

This interpretation clears up the apparent inconsistencies alluded to earlier. For instance, the sum ${1 + 2 + 3 + \ldots}$ consists only of non-negative terms, as do its smoothed partial sums ${\sum_{n=1}^\infty n\, \eta(n/N)}$ (if ${\eta}$ is non-negative). Comparing this with (12), we see that this forces the highest-order term ${C_{\eta,1} N^2}$ to be non-negative (as indeed it is), but does not prohibit the *lower-order* constant term ${-1/12}$ from being negative (which of course it is).

Similarly, if we add together (12) and (11) we obtain

$\displaystyle \sum_{n=1}^\infty (n+1)\, \eta(n/N) = -\frac{7}{12} + C_{\eta,1} N^2 + C_{\eta,0} N + O\left(\frac{1}{N}\right) \ \ \ \ \ (16)$

while if we subtract ${1}$ from (12) we obtain instead

$\displaystyle \sum_{n=2}^\infty n\, \eta(n/N) = -\frac{13}{12} + C_{\eta,1} N^2 + O\left(\frac{1}{N}\right). \ \ \ \ \ (17)$

These two asymptotics are not inconsistent with each other; indeed, if we shift the index of summation in (17), we can write

$\displaystyle \sum_{n=2}^\infty n\, \eta(n/N) = \sum_{n=1}^\infty (n+1)\, \eta((n+1)/N) \ \ \ \ \ (18)$
and so we now see that the discrepancy between the two sums in (8), (9) comes from the shifting of the cutoff from ${\eta(n/N)}$ to ${\eta((n+1)/N)}$, which is invisible in the formal expressions in (8), (9) but becomes manifestly present in the smoothed sum formulation.

Exercise 2 By Taylor expanding ${\eta((n+1)/N)}$ and using (11), (18), show that (16) and (17) are indeed consistent with each other, and in particular one can deduce the latter from the former.

Jean-Pierre Serre (whose papers are, of course, always worth reading) recently posted a lovely lecture on the arXiv entitled “How to use finite fields for problems concerning infinite fields”. In it, he describes several ways in which algebraic statements over fields of zero characteristic, such as ${{\bf C}}$, can be deduced from their positive characteristic counterparts, such as ${\overline{{\bf F}_p}}$, despite the fact that there is no non-trivial field homomorphism between the two types of fields. In particular, finitary tools, including such basic concepts as cardinality, can now be deployed to establish infinitary results. This leads to some simple and elegant proofs of non-trivial algebraic results which are not easy to establish by other means.

One deduction of this type is based on the idea that positive characteristic fields can partially *model* zero characteristic fields, and proceeds like this: if a certain algebraic statement failed over (say) ${{\bf C}}$, then there should be a “finitary algebraic” obstruction that “witnesses” this failure over ${{\bf C}}$. Because this obstruction is both finitary and algebraic, it must also be definable in some (large) finite characteristic, thus leading to a comparable failure over a finite characteristic field. Taking contrapositives, one obtains the claim.

Algebra is definitely not my own field of expertise, but it is interesting to note that similar themes have also come up in my own area of additive combinatorics (and more generally arithmetic combinatorics), because the combinatorics of addition and multiplication on finite sets is definitely of a “finitary algebraic” nature. For instance, a recent paper of Vu, Wood, and Wood establishes a finitary “Freiman-type” homomorphism from (finite subsets of) the complex numbers to large finite fields that allows them to pull back many results in arithmetic combinatorics in finite fields (e.g. the sum-product theorem) to the complex plane. (Van Vu and I also used a similar trick to control the singularity property of random sign matrices by first mapping them into finite fields in which cardinality arguments became available.) And I have a particular fondness for correspondences between finitary and infinitary mathematics; the correspondence Serre discusses is slightly different from the one I discuss for instance in here or here, although there seems to be a common theme of “compactness” (or of model theory) tying these correspondences together.

As one of his examples, Serre cites one of my own favourite results in algebra, discovered independently by Ax and by Grothendieck (and then rediscovered many times since). Here is a special case of that theorem:

Theorem 1 (Ax-Grothendieck theorem, special case) Let ${P: {\bf C}^n \rightarrow {\bf C}^n}$ be a polynomial map from a complex vector space to itself. If ${P}$ is injective, then ${P}$ is bijective.

The full version of the theorem allows one to replace ${{\bf C}^n}$ by an algebraic variety over any algebraically closed field, and ${P}$ to be a morphism from that algebraic variety to itself, but for simplicity I will just discuss the above special case. This theorem is not at all obvious; it is not too difficult (see Lemma 4 below) to show that the Jacobian of ${P}$ is non-degenerate, but this does not come close to solving the problem, since one would then be faced with the notorious Jacobian conjecture. Also, the claim fails if “polynomial” is replaced by “holomorphic”, due to the existence of Fatou-Bieberbach domains.
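The finitary engine that makes the finite-field strategy work is disarmingly simple: an injective map from a finite set to itself is automatically surjective, by pigeonhole. A toy illustration over ${{\bf F}_7}$ (my own example, not taken from Serre's lecture), using the map ${x \mapsto x^5}$, which is injective since ${\gcd(5, 7-1) = 1}$:

```python
# Pigeonhole over a finite field: an injective self-map of a finite set
# is automatically surjective.  Illustrated with x -> x^5 on F_7.
p = 7
values = [pow(x, 5, p) for x in range(p)]
assert len(set(values)) == p          # injective: all 7 outputs distinct ...
assert set(values) == set(range(p))   # ... hence surjective: every element is hit
```

The substance of the Ax-Grothendieck argument is of course in transferring this trivial observation from ${\overline{{\bf F}_p}}$ back to ${{\bf C}}$, which is where the model-theoretic or algebraic machinery enters.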

In this post I would like to give the proof of Theorem 1 based on finite fields as mentioned by Serre, as well as another elegant proof of Rudin that combines algebra with some elementary complex variable methods. (There are several other proofs of this theorem and its generalisations, for instance a topological proof by Borel, which I will not discuss here.)

*Update, March 8: Some corrections to the finite field proof. Thanks to Matthias Aschenbrenner also for clarifying the relationship with Tarski’s theorem and some further references.*

*[This post was typeset using a LaTeX to WordPress-HTML converter kindly provided to me by Luca Trevisan.]*

Many properties of a (sufficiently nice) function ${f: {\bf R} \rightarrow {\bf C}}$ are reflected in its Fourier transform ${\hat f: {\bf R} \rightarrow {\bf C}}$, defined by the formula

$\displaystyle \hat f(\xi) := \int_{\bf R} f(x) e^{-2\pi i x \xi}\, dx.$

For instance, decay properties of ${f}$ are reflected in smoothness properties of ${\hat f}$, as the following table shows:

If ${f}$ is… | then ${\hat f}$ is… | and this relates to…
--- | --- | ---
Square-integrable | square-integrable | Plancherel’s theorem
Absolutely integrable | continuous | Riemann-Lebesgue lemma
Rapidly decreasing | smooth | theory of Schwartz functions
Exponentially decreasing | analytic in a strip |
Compactly supported | entire and at most exponential growth | Paley-Wiener theorem
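The smoothness-versus-decay dichotomy in the table can be observed numerically with a discrete Fourier transform. The sketch below (my own illustration, with parameters chosen by hand) compares a smooth bump with a discontinuous indicator: the Gaussian's coefficients are rapidly decreasing, while the indicator's decay only like ${1/k}$.

```python
import cmath
import math

# Naive DFT coefficient (normalised), approximating the k-th Fourier
# coefficient of a function sampled on [0, 1).
def dft_coeff(samples, k):
    n = len(samples)
    return sum(samples[j] * cmath.exp(-2j * math.pi * k * j / n)
               for j in range(n)) / n

n = 256
xs = [j / n for j in range(n)]
gauss = [math.exp(-((x - 0.5) / 0.08) ** 2) for x in xs]   # smooth bump
step = [1.0 if 0.25 <= x < 0.75 else 0.0 for x in xs]      # jump discontinuities

# At a moderately high frequency the smooth function's coefficient is
# already orders of magnitude smaller than the discontinuous one's.
k = 15
assert abs(dft_coeff(step, k)) > 100 * abs(dft_coeff(gauss, k))
```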

Another important relationship between a function ${f}$ and its Fourier transform ${\hat f}$ is the *uncertainty principle*, which roughly asserts that if a function ${f}$ is highly localised in physical space, then its Fourier transform ${\hat f}$ must be widely dispersed in frequency space, or to put it another way, ${f}$ and ${\hat f}$ cannot both decay too strongly at infinity (except of course in the degenerate case ${f=0}$). There are many ways to make this intuition precise. One of them is the Heisenberg uncertainty principle, which asserts that if we normalise

$\displaystyle \int_{\bf R} |f(x)|^2\, dx = 1$

then we must have

$\displaystyle \left(\int_{\bf R} x^2 |f(x)|^2\, dx\right) \cdot \left(\int_{\bf R} \xi^2 |\hat f(\xi)|^2\, d\xi\right) \geq \frac{1}{16\pi^2},$

thus forcing at least one of ${f}$ or ${\hat f}$ to not be too concentrated near the origin. This principle can be proven (for sufficiently nice ${f}$, initially) by observing the integration by parts identity

$\displaystyle \int_{\bf R} |f(x)|^2\, dx = -\int_{\bf R} x \frac{d}{dx} |f(x)|^2\, dx = -2\, \hbox{Re} \int_{\bf R} x f'(x) \overline{f(x)}\, dx$
and then using Cauchy-Schwarz and the Plancherel identity.
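For the reader who wants these steps spelled out, here is a sketch of the chain of inequalities, under the normalisation ${\hat f(\xi) = \int f(x) e^{-2\pi i x \xi}\, dx}$ and ${\|f\|_{L^2} = 1}$ (so that ${\widehat{f'}(\xi) = 2\pi i \xi \hat f(\xi)}$):

```latex
\begin{align*}
1 = \int_{\mathbf R} |f|^2\,dx
  &= -\int_{\mathbf R} x \frac{d}{dx}|f(x)|^2\,dx
     && \text{(integration by parts)}\\
  &= -2\,\operatorname{Re}\int_{\mathbf R} x f'(x)\, \overline{f(x)}\,dx\\
  &\le 2 \Big(\int_{\mathbf R} x^2 |f(x)|^2\,dx\Big)^{1/2}
         \Big(\int_{\mathbf R} |f'(x)|^2\,dx\Big)^{1/2}
     && \text{(Cauchy--Schwarz)}\\
  &= 4\pi \Big(\int_{\mathbf R} x^2 |f(x)|^2\,dx\Big)^{1/2}
          \Big(\int_{\mathbf R} \xi^2 |\hat f(\xi)|^2\,d\xi\Big)^{1/2}
     && \text{(Plancherel, applied to } f'\text{)}
\end{align*}
```

Squaring the final inequality gives the stated lower bound of ${1/(16\pi^2)}$ on the product of the two variances.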

Another well known manifestation of the uncertainty principle is the fact that it is not possible for ${f}$ and ${\hat f}$ to both be compactly supported (unless of course they vanish entirely). This can in fact be seen from the above table: if ${f}$ is compactly supported, then ${\hat f}$ is an entire function; but the zeroes of a non-zero entire function are isolated, yielding a contradiction unless ${f}$ vanishes. (Indeed, the table also shows that if one of ${f}$ and ${\hat f}$ is compactly supported, then the other cannot have exponential decay.)

On the other hand, we have the example of the Gaussian functions ${f(x) = e^{-\pi a x^2}}$, ${\hat f(\xi) = a^{-1/2} e^{-\pi \xi^2/a}}$ for ${a > 0}$, which both decay faster than exponentially. The classical *Hardy uncertainty principle* asserts, roughly speaking, that this is the fastest that ${f}$ and ${\hat f}$ can simultaneously decay:

Theorem 1 (Hardy uncertainty principle) Suppose that ${f}$ is a (measurable) function such that ${|f(x)| \leq C e^{-\pi a x^2}}$ and ${|\hat f(\xi)| \leq C e^{-\pi \xi^2/a}}$ for all ${x, \xi}$ and some ${C, a > 0}$. Then ${f}$ is a scalar multiple of the gaussian ${e^{-\pi a x^2}}$.

This theorem is proven by complex-analytic methods, in particular the Phragmén-Lindelöf principle; for sake of completeness we give that proof below. But I was curious to see if there was a real-variable proof of the same theorem, avoiding the use of complex analysis. I was able to find the proof of a slightly weaker theorem:

Theorem 2 (Weak Hardy uncertainty principle) Suppose that ${f}$ is a non-zero (measurable) function such that ${|f(x)| \leq C e^{-\pi a x^2}}$ and ${|\hat f(\xi)| \leq C e^{-\pi b \xi^2}}$ for all ${x, \xi}$ and some ${C, a, b > 0}$. Then ${ab \leq C_0}$ for some absolute constant ${C_0}$.

Note that the correct value of ${C_0}$ should be ${1}$, as is implied by the true Hardy uncertainty principle. Despite the weaker statement, I thought the proof might still be of interest as it is a little less “magical” than the complex-variable one, and so I am giving it below.

In the second of the Distinguished Lecture Series given by Eli Stein here at UCLA, Eli expanded on the themes in the first lecture, in particular providing more details as to the recent (not yet published) results of Lanzani and Stein on the boundedness of the Cauchy integral on domains in several complex variables.
