
The equidistribution theorem asserts that if $\alpha$ is an irrational phase, then the sequence $(n\alpha)_{n=1}^\infty$ is equidistributed on the unit circle $\mathbb{R}/\mathbb{Z}$, or equivalently that

$\displaystyle \frac{1}{N} \sum_{n=1}^N f(n\alpha) \rightarrow \int_0^1 f(x)\ dx$

for any continuous (or equivalently, for any smooth) function $f: \mathbb{R}/\mathbb{Z} \rightarrow \mathbb{C}$. By approximating $f$ uniformly by a Fourier series, this claim is equivalent to that of showing that

$\displaystyle \frac{1}{N} \sum_{n=1}^N e(kn\alpha) \rightarrow 0$

for any non-zero integer $k$ (where $e(x) := e^{2\pi i x}$), which is easily verified from the irrationality of $\alpha$ and the geometric series formula. Conversely, if $\alpha$ is rational, then $\frac{1}{N} \sum_{n=1}^N e(kn\alpha)$ clearly fails to go to zero when $k$ is a multiple of the denominator of $\alpha$.
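This dichotomy is easy to see numerically; here is a quick Python sketch (the helper name `weyl_sum` is mine, not from the post) contrasting an irrational phase with a rational one:

```python
import cmath
import math

def weyl_sum(alpha, k, N):
    """Average (1/N) * sum_{n=1}^{N} e(k*n*alpha), with e(x) = exp(2*pi*i*x)."""
    return sum(cmath.exp(2j * math.pi * k * n * alpha) for n in range(1, N + 1)) / N

# Irrational phase: the averages decay to zero for every non-zero k.
for N in (100, 1000, 10000):
    print(N, abs(weyl_sum(math.sqrt(2), 1, N)))

# Rational phase alpha = 1/3: taking k equal to the denominator, every summand
# is (numerically) 1, so the average stays at 1 and equidistribution fails.
print(abs(weyl_sum(1 / 3, 3, 1000)))
```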

One can then ask for more quantitative information about the decay of such exponential sums, or more generally about exponential sums of the form $\sum_{n \in P} e(\phi(n))$ for an arithmetic progression $P$ (in this post all progressions are understood to be finite) and a polynomial $\phi$. It will be convenient to phrase such information in the form of an *inverse theorem*, describing those phases for which the exponential sum is large. Indeed, we have

Lemma 1 (Geometric series formula, inverse form)Let be an arithmetic progression of length at most for some , and let be a linear polynomial for some . If for some , then there exists a subprogression of of size such that varies by at most on (that is to say, lies in a subinterval of of length at most ).

*Proof:* By a linear change of variable we may assume that is of the form for some . We may of course assume that is non-zero in , so that ( denotes the distance from to the nearest integer). From the geometric series formula we see that

and so . Setting for some sufficiently small absolute constant , we obtain the claim.
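The quantitative bound underlying this computation, namely that the linear exponential sum $|\sum_{n=0}^{N-1} e(\alpha n)|$ has size at most $\min(N, \frac{1}{2\|\alpha\|})$, can be sanity-checked numerically; a small sketch (helper names are mine):

```python
import cmath
import math
import random

def dist_to_int(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

def linear_sum(alpha, N):
    """|sum_{n=0}^{N-1} e(alpha * n)| with e(x) = exp(2*pi*i*x)."""
    return abs(sum(cmath.exp(2j * math.pi * alpha * n) for n in range(N)))

random.seed(0)
for _ in range(200):
    alpha, N = random.random(), random.randint(1, 300)
    cap = 1 / (2 * dist_to_int(alpha)) if dist_to_int(alpha) > 0 else float("inf")
    # geometric series bound: |sum| <= min(N, 1/(2*||alpha||))
    assert linear_sum(alpha, N) <= min(N, cap) + 1e-6
```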

Thus, in order for a linear phase to fail to be equidistributed on some long progression, it must in fact be almost constant on a large piece of that progression.

As is well known, this phenomenon generalises to higher order polynomials. To establish this, we will need two additional elementary lemmas. The first relates the exponential sums of a polynomial phase to the exponential sums of its “first derivatives”.

Lemma 2 (Van der Corput lemma, inverse form)Let be an arithmetic progression of length at most , and let be an arbitrary function such that

for some . Then, for integers , there exists a subprogression of , of the same spacing as , such that

*Proof:* Squaring (1), we see that

We write and conclude that

where is a subprogression of of the same spacing. Since , we conclude that

for values of (this can be seen, much like the pigeonhole principle, by arguing via contradiction for a suitable choice of implied constants). The claim follows.
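The squaring step in this argument is an exact identity, $|\sum_{n \in P} e(f(n))|^2 = \sum_h \sum_{n, n+h \in P} e(f(n+h) - f(n))$, which can be verified numerically for an arbitrary phase function; a quick sketch:

```python
import cmath
import math
import random

random.seed(1)
N = 50
f = [random.random() for _ in range(N)]  # an arbitrary phase function on P = {0, ..., N-1}

def e(x):
    return cmath.exp(2j * math.pi * x)

S = sum(e(fn) for fn in f)
# Right-hand side: sum over shifts h of the exponential sums of the
# "derivatives" f(n+h) - f(n), with n ranging so that both n and n+h lie in P.
rhs = sum(
    e(f[n + h] - f[n])
    for h in range(-(N - 1), N)
    for n in range(max(0, -h), min(N, N - h))
)
assert abs(abs(S) ** 2 - rhs.real) < 1e-6 and abs(rhs.imag) < 1e-6
```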

The second lemma (which we recycle from this previous blog post) is a variant of the equidistribution theorem.

Lemma 3 (Vinogradov lemma)Let be an interval for some , and let be such that for at least values of , for some . Then either

or else there is a natural number such that

*Proof:* We may assume that and , since we are done otherwise. Then there are at least two with , and by the pigeonhole principle we can find in with and . By the triangle inequality, we conclude that there exists at least one natural number for which

We take to be minimal amongst all such natural numbers; we then see that there exists coprime to and such that

If then we are done, so suppose that . Suppose that are elements of such that and . Writing for some , we have

By hypothesis, ; note that as and we also have . This implies that and thus . We then have

We conclude that for fixed with , there are at most elements of such that . Iterating this with a greedy algorithm, we see that the number of with is at most ; since , this implies that

and the claim follows.
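The conclusion of lemmas of this type, namely the existence of a small natural number $q$ with $q\alpha$ close to the integers, is cheap to exhibit by brute force (this is essentially the Dirichlet approximation theorem); a sketch with my own helper names:

```python
import math

def dist_to_int(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

def small_denominator(alpha, Q):
    """Return the q in 1..Q minimizing ||q*alpha||.  By the Dirichlet
    approximation theorem the minimum is at most 1/(Q+1)."""
    return min(range(1, Q + 1), key=lambda q: dist_to_int(q * alpha))

alpha = math.sqrt(2)
q = small_denominator(alpha, 100)
print(q, dist_to_int(q * alpha))  # q = 70, a continued fraction denominator of sqrt(2)
assert dist_to_int(q * alpha) <= 1 / 101
```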

Now we can quickly obtain a higher degree version of Lemma 1:

Proposition 4 (Weyl exponential sum estimate, inverse form)Let be an arithmetic progression of length at most for some , and let be a polynomial of degree at most . If for some , then there exists a subprogression of with such that varies by at most on .

*Proof:* We induct on . The cases are immediate from Lemma 1. Now suppose that , and that the claim has already been proven for . To simplify the notation, we allow implied constants to depend on . Let the hypotheses be as in the proposition. Clearly cannot exceed . By shrinking as necessary, we may assume that for some sufficiently small constant depending on .

By rescaling we may assume . By Lemma 3, we see that for choices of such that

for some interval . We write ; then is a polynomial of degree at most with leading coefficient . We conclude from the induction hypothesis that for each such , there exists a natural number such that . By double-counting, this implies that there are integers in the interval such that . Applying Lemma 3, we conclude that either , or that

In the former case the claim is trivial (just take to be a point), so we may assume that we are in the latter case.

We partition into arithmetic progressions of spacing and length comparable to for some large depending on to be chosen later. By hypothesis, we have

so by the pigeonhole principle, we have

for at least one such progression . On this progression, we may use the binomial theorem and (4) to write as a polynomial in of degree at most , plus an error of size . We thus can write for for some polynomial of degree at most . By the triangle inequality, we thus have (for large enough) that

and hence by the induction hypothesis we may find a subprogression of of size such that varies by at most on , and thus (for large enough again) that varies by at most on , and the claim follows.
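The content of this proposition for quadratic phases can be seen numerically: the normalised quadratic exponential sum is small for a badly approximable phase such as $\sqrt{2}$, but stays large when the phase is rational with small denominator. A rough illustration (thresholds chosen loosely):

```python
import cmath
import math

def quad_sum(alpha, N):
    """Normalised quadratic Weyl sum (1/N) |sum_{n=1}^{N} e(alpha * n^2)|."""
    return abs(sum(cmath.exp(2j * math.pi * alpha * n * n) for n in range(1, N + 1))) / N

N = 1000
# alpha = 1/4: n^2 mod 4 is 0 or 1, so the summands take only the values 1 and i,
# and the sum has size comparable to N:  |500 + 500i| / 1000 ~ 0.707.
print(quad_sum(0.25, N))
# alpha = sqrt(2): square-root-type cancellation, far below the rational case.
print(quad_sum(math.sqrt(2), N))
```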

This gives the following corollary (also given as Exercise 16 in this previous blog post):

Corollary 5 (Weyl exponential sum estimate, inverse form II)Let be a discrete interval for some , and let be a polynomial of degree at most for some . If for some , then there is a natural number such that for all .

One can obtain much better exponents here using Vinogradov’s mean value theorem; see Theorem 1.6 of this paper of Wooley. (Thanks to Mariusz Mirek for this reference.) However, this weaker result already suffices for many applications, and does not need any result as deep as the mean value theorem.

*Proof:* To simplify notation we allow implied constants to depend on . As before, we may assume that for some small constant depending only on . We may also assume that for some large , as the claim is trivial otherwise (set ).

Applying Proposition 4, we can find a natural number and an arithmetic subprogression of such that and such that varies by at most on . Writing for some interval of length and some , we conclude that the polynomial varies by at most on . Taking order differences, we conclude that the coefficient of this polynomial is ; by the binomial theorem, this implies that differs by at most on from a polynomial of degree at most . Iterating this, we conclude that the coefficient of is for , and the claim then follows by inverting the change of variables (and replacing with a larger quantity such as as necessary).

For future reference we also record a higher degree version of the Vinogradov lemma.

Lemma 6 (Polynomial Vinogradov lemma)Let be a discrete interval for some , and let be a polynomial of degree at most for some such that for at least values of , for some . Then either

or else there is a natural number such that

for all .

*Proof:* We induct on . For this follows from Lemma 3 (noting that if then ), so suppose that and that the claim is already proven for . We now allow all implied constants to depend on .

For each , let denote the number of such that . By hypothesis, , and clearly , so we must have for choices of . For each such , we then have for choices of , so by induction hypothesis, either (5) or (6) holds, or else for choices of , there is a natural number such that

for , where are the coefficients of the degree polynomial . We may of course assume it is the latter which holds. By the pigeonhole principle we may take to be independent of .

Since , we have

for choices of , so by Lemma 3, either (5) or (6) holds, or else (after increasing as necessary) we have

We can again assume it is the latter that holds. This implies that modulo , so that

for choices of . Arguing as before and iterating, we obtain the claim.

The above results also extend to higher dimensions. Here is the higher dimensional version of Proposition 4:

Proposition 7 (Multidimensional Weyl exponential sum estimate, inverse form)Let and , and let be arithmetic progressions of length at most for each . Let be a polynomial of degrees at most in each of the variables separately. If for some , then there exists a subprogression of with for each such that varies by at most on .

A much more general statement, in which the polynomial phase is replaced by a nilsequence, and in which one does not necessarily assume the exponential sum is small, is given in Theorem 8.6 of this paper of Ben Green and myself, but it involves far more notation to even state properly.

*Proof:* We induct on . The base case was established in Proposition 4, so we assume that and that the claim has already been proven for . To simplify notation we allow all implied constants to depend on . We may assume that for some small depending only on .

By a linear change of variables, we may assume that for all .

We write . First suppose that . Then by the pigeonhole principle we can find such that

and the claim then follows from the induction hypothesis. Thus we may assume that for some large depending only on . Similarly we may assume that for all .

By the triangle inequality, we have

The inner sum is , and the outer sum has terms. Thus, for choices of , one has

for some polynomials of degrees at most in the variables . For each obeying (7), we apply Corollary 5 to conclude that there exists a natural number such that

for (the claim also holds for but we discard it as being trivial). By the pigeonhole principle, there thus exists a natural number such that

for all and for choices of . If we write

where is a polynomial of degrees at most , then for choices of we then have

Applying Lemma 6 in the variable, together with the largeness hypotheses on the (and also the assumption that ), we conclude (after enlarging as necessary, and pigeonholing to keep independent of ) that

for all (note that we now include that case, which is no longer trivial) and for choices of . Iterating this, we eventually conclude (after enlarging as necessary) that

whenever for , with nonzero. Permuting the indices, and observing that the claim is trivial for , we in fact obtain (8) for all , at which point the claim easily follows by taking for each .

An inspection of the proof of the above result (or alternatively, by combining the above result again with many applications of Lemma 6) reveals the following general form of Proposition 4, which was posed as Exercise 17 in this previous blog post, but had a slight misprint in it (it did not properly treat the possibility that some of the could be small) and was a bit trickier to prove than anticipated (in fact, the reason for this post was that I was asked to supply a more detailed solution for this exercise):

Proposition 8 (Multidimensional Weyl exponential sum estimate, inverse form, II)Let be a natural number, and for each , let be a discrete interval for some . Letbe a polynomial in variables of multidegrees for some . If

for some , then either (10) holds, or else there is a natural number such that (11) holds.

Again, the factor of is natural in this bound. In the case, the option (10) may be deleted since (11) trivially holds in this case, but this simplification is no longer available for since one needs (10) to hold for *all* (not just one ) to make (11) completely trivial. Indeed, the above proposition fails for if one removes (10) completely, as can be seen for instance by inspecting the exponential sum , which has size comparable to regardless of how irrational is.

Here’s a cute identity I discovered by accident recently. Observe that

$\displaystyle \frac{d^2}{dx^2} (1+x^2)^{1/2} = \frac{1}{(1+x^2)^{3/2}}$

and so one can conjecture that one has

$\displaystyle \frac{d^{n+1}}{dx^{n+1}} (1+x^2)^{n/2} = 0$

when $n$ is even, and

$\displaystyle \frac{d^{n+1}}{dx^{n+1}} (1+x^2)^{n/2} = \frac{(1 \times 3 \times \dots \times n)^2}{(1+x^2)^{(n+2)/2}}$

when $n$ is odd. This is obvious in the even case since $(1+x^2)^{n/2}$ is then a polynomial of degree $n$, but I struggled for a while with the odd case before finding a slick three-line proof. (I was first trying to prove the weaker statement that $\frac{d^{n+1}}{dx^{n+1}} (1+x^2)^{n/2}$ was non-negative, but for some strange reason I was only able to establish this by working out the derivative exactly, rather than by using more analytic methods, such as convexity arguments.) I thought other readers might like the challenge (and also I’d like to see some other proofs), so rather than post my own proof immediately, I’ll see if anyone would like to supply their own proofs or thoughts in the comments. Also I am curious to know if this identity is connected to any other existing piece of mathematics.
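One way to explore an identity of this shape, namely that the $(n+1)$-fold derivative of $(1+x^2)^{n/2}$ vanishes for even $n$ and equals $(1 \times 3 \times \dots \times n)^2 (1+x^2)^{-(n+2)/2}$ for odd $n$, is exact rational arithmetic rather than floating point. Here is a short verification sketch (the term representation, a sum of $P(x)(1+x^2)^s$ with rational $P$ and half-integer $s$, is my own device):

```python
from fractions import Fraction

def differentiate(terms):
    """Differentiate a function stored as {s: poly}, meaning sum_s P_s(x)*(1+x^2)^s,
    where poly maps the power of x to a rational coefficient."""
    out = {}
    def add(s, p, c):
        out.setdefault(s, {})
        out[s][p] = out[s].get(p, Fraction(0)) + c
    for s, poly in terms.items():
        for p, c in poly.items():
            if p > 0:
                add(s, p - 1, c * p)          # derivative of x^p
            if s != 0:
                add(s - 1, p + 1, 2 * s * c)  # derivative of (1+x^2)^s
    return out

def normalise(terms):
    """Rewrite everything over the lowest exponent; return (s_min, combined poly)."""
    terms = {s: {p: c for p, c in poly.items() if c} for s, poly in terms.items()}
    terms = {s: poly for s, poly in terms.items() if poly}
    if not terms:
        return None, {}
    smin = min(terms)
    total = {}
    for s, poly in terms.items():
        for _ in range(int(s - smin)):        # multiply by (1+x^2) repeatedly
            new = {}
            for p, c in poly.items():
                new[p] = new.get(p, Fraction(0)) + c
                new[p + 2] = new.get(p + 2, Fraction(0)) + c
            poly = new
        for p, c in poly.items():
            total[p] = total.get(p, Fraction(0)) + c
    return smin, {p: c for p, c in total.items() if c}

def check(n):
    """(n+1)-fold derivative of (1+x^2)^(n/2), in normalised form."""
    terms = {Fraction(n, 2): {0: Fraction(1)}}
    for _ in range(n + 1):
        terms = differentiate(terms)
    return normalise(terms)

for n in (2, 4, 6):
    assert check(n)[1] == {}                  # even case: the derivative vanishes
for n in (1, 3, 5, 7):
    smin, poly = check(n)
    dfact = 1
    for k in range(1, n + 1, 2):
        dfact *= k                            # double factorial 1*3*...*n
    assert smin == -Fraction(n + 2, 2) and poly == {0: Fraction(dfact * dfact)}
```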

I’ve just uploaded to the arXiv my paper “Cancellation for the multilinear Hilbert transform“, submitted to Collectanea Mathematica. This paper uses methods from additive combinatorics (and more specifically, the arithmetic regularity and counting lemmas from this paper of Ben Green and myself) to obtain a slight amount of progress towards the open problem of obtaining bounds for the trilinear and higher Hilbert transforms (as discussed in this previous blog post). For instance, the trilinear Hilbert transform

is not known to be bounded for any choice of exponents, although it is conjectured to be bounded when and . (For well below , one can use additive combinatorics constructions to demonstrate unboundedness; see this paper of Demeter.) One can approach this problem by considering the truncated trilinear Hilbert transforms

for . It is not difficult to show that the boundedness of is equivalent to the boundedness of with bounds that are uniform in and . On the other hand, from Minkowski’s inequality and Hölder’s inequality one can easily obtain the *non-uniform* bound of for . The main result of this paper is a slight improvement of this trivial bound to as . Roughly speaking, the way this gain is established is as follows. First there are some standard time-frequency type reductions to reduce to the task of obtaining some non-trivial cancellation on a single “tree”. Using a “generalised von Neumann theorem”, we show that such cancellation will happen if (a discretised version of) one or more of the functions (or a dual function that it is convenient to test against) is small in the Gowers norm. However, the arithmetic regularity lemma alluded to earlier allows one to represent an arbitrary function , up to a small error, as the sum of such a “Gowers uniform” function, plus a structured function (or more precisely, an *irrational virtual nilsequence*). This effectively reduces the problem to that of establishing some cancellation in a single tree in the case when all functions involved are irrational virtual nilsequences. At this point, the contribution of each component of the tree can be estimated using the “counting lemma” from my paper with Ben. The main term in the asymptotics is a certain integral over a nilmanifold, but because the kernel in the trilinear Hilbert transform is odd, it turns out that this integral vanishes, giving the required cancellation.

The same argument works for higher order Hilbert transforms (and one can also replace the coefficients in these transforms with other rational constants). However, because the quantitative bounds in the arithmetic regularity and counting lemmas are so poor, it does not seem likely that one can use these methods to remove the logarithmic growth in entirely, and some additional ideas will be needed to resolve the full conjecture.

I’ve just uploaded to the arXiv my paper “Failure of the pointwise and maximal ergodic theorems for the free group“, submitted to Forum of Mathematics, Sigma. This paper concerns a variant of the pointwise ergodic theorem of Birkhoff, which asserts that if one has a measure-preserving shift map on a probability space , then for any , the averages converge pointwise almost everywhere. (In the important case when the shift map is ergodic, the pointwise limit is simply the mean of the original function .)

The pointwise ergodic theorem can be extended to measure-preserving actions of other amenable groups, if one uses a suitably “tempered” Folner sequence of averages; see this paper of Lindenstrauss for more details. (I also wrote up some notes on that paper here, back in 2006 before I had started this blog.) But the arguments used to handle the amenable case break down completely for non-amenable groups, and in particular for the free non-abelian group on two generators.

Nevo and Stein studied this problem and obtained a number of pointwise ergodic theorems for -actions on probability spaces . For instance, for the spherical averaging operators

(where denotes the length of the reduced word that forms ), they showed that converged pointwise almost everywhere provided that was in for some . (The need to restrict to spheres of even radius can be seen by considering the action of on the two-element set in which both generators of act by interchanging the elements, in which case is determined by the parity of .) This result was reproven with a different and simpler proof by Bufetov, who also managed to relax the condition to the weaker condition .

The question remained open as to whether the pointwise ergodic theorem for -actions held if one only assumed that was in . Nevo and Stein were able to establish this for the Cesàro averages , but not for itself. About six years ago, Assaf Naor and I tried our hand at this problem, and were able to show an associated maximal inequality on , but due to the non-amenability of , this inequality did not transfer to and did not have any direct impact on this question, despite a fair amount of effort on our part to attack it.

Inspired by some recent conversations with Lewis Bowen, I returned to this problem. This time around, I tried to construct a counterexample to the pointwise ergodic theorem – something Assaf and I had not seriously attempted to do (perhaps due to being a bit too enamoured of our maximal inequality). I knew of an existing counterexample of Ornstein regarding a failure of an ergodic theorem for iterates of a self-adjoint Markov operator – in fact, I had written some notes on this example back in 2007. Upon revisiting my notes, I soon discovered that the Ornstein construction was adaptable to the setting, thus settling the problem in the negative:

Theorem 1 (Failure of pointwise ergodic theorem)There exists a measure-preserving -action on a probability space and a non-negative function such that for almost every .

To describe the proof of this theorem, let me first briefly sketch the main ideas of Ornstein’s construction, which gave an example of a self-adjoint Markov operator on a probability space and a non-negative such that for almost every . By some standard manipulations, it suffices to show that for any given and , there exists a self-adjoint Markov operator on a probability space and a non-negative with , such that on a set of measure at least . Actually, it will be convenient to replace the Markov chain with an *ancient Markov chain* – that is to say, a sequence of non-negative functions for both positive and negative , such that for all . The purpose of requiring the Markov chain to be ancient (that is, to extend infinitely far back in time) is to allow for the Markov chain to be shifted arbitrarily in time, which is key to Ornstein’s construction. (Technically, Ornstein’s original argument only uses functions that go back to a large negative time, rather than being infinitely ancient, but I will gloss over this point for sake of discussion, as it turns out that the version of the argument can be run using infinitely ancient chains.)

For any , let denote the claim that for any , there exists an ancient Markov chain with such that on a set of measure at least . Clearly holds since we can just take for all . Our objective is to show that holds for arbitrarily small . The heart of Ornstein’s argument is then the implication

for any , which upon iteration quickly gives the desired claim.

Let’s see informally how (1) works. By hypothesis, and ignoring epsilons, we can find an ancient Markov chain on some probability space of total mass , such that attains the value of or greater almost everywhere. Assuming that the Markov process is irreducible, the will eventually converge as to the constant value of , in particular its final state will essentially stay above (up to small errors).

Now suppose we duplicate the Markov process by replacing with a double copy (giving the uniform probability measure), and using the disjoint sum of the Markov operators on and as the propagator, so that there is no interaction between the two components of this new system. Then the functions form an ancient Markov chain of mass at most that lives solely in the first half of this copy, and attains the value of or greater on almost all of the first half , but is zero on the second half. The final state of will be to stay above in the first half , but be zero on the second half.

Now we modify the above example by allowing an infinitesimal amount of interaction between the two halves , of the system (I mentally think of and as two identical boxes that a particle can bounce around in, and now we wish to connect the boxes by a tiny tube). The precise way in which this interaction is inserted is not terribly important so long as the new Markov process is irreducible. Once one does so, then the ancient Markov chain in the previous example gets replaced by a slightly different ancient Markov chain which is more or less identical with for negative times , or for bounded positive times , but for very large values of the final state is now constant across the entire state space , and will stay above on this space.
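The effect of this infinitesimal coupling can be seen already in a toy two-state version, where a small symmetric interaction eventually equalises the mass between the two boxes (the parameters below are arbitrary choices of mine):

```python
# Toy version of the two weakly coupled boxes: a symmetric (hence self-adjoint)
# Markov operator on two states, with a small interaction eps between the halves.
eps = 0.01
P = [[1 - eps, eps],
     [eps, 1 - eps]]

state = [1.0, 0.0]  # all mass starts in the first box
for _ in range(2000):
    state = [sum(P[i][j] * state[j] for j in range(2)) for i in range(2)]

# The second eigenvalue is 1 - 2*eps, so after many steps the mass splits evenly.
print(state)
assert abs(state[0] - 0.5) < 1e-6 and abs(state[1] - 0.5) < 1e-6
```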

Finally, we consider an ancient Markov chain which is basically of the form

for some large parameter and for all (the approximation becomes increasingly inaccurate for much larger than , but never mind this for now). This is basically two copies of the original Markov process in separate, barely interacting state spaces , but with the second copy delayed by a large time delay , and also attenuated in amplitude by a factor of . The total mass of this process is now . Because of the component of , we see that basically attains the value of or greater on the first half . On the second half , we work with times close to . If is large enough, would have averaged out to about at such times, but the component can get as large as here. Summing (and continuing to ignore various epsilon losses), we see that can get as large as on almost all of the second half of . This concludes the rough sketch of how one establishes the implication (1).

It was observed by Bufetov that the spherical averages for a free group action can be lifted up to become powers of a Markov operator, basically by randomly assigning a “velocity vector” to one’s base point and then applying the Markov process that moves along that velocity vector (and then randomly changing the velocity vector at each time step, subject to the “reduced word” condition that the velocity never flips from to ). Thus the spherical average problem has a Markov operator interpretation, which opens the door to adapting the Ornstein construction to the setting of systems. This turns out to be doable after a certain amount of technical artifice; the main thing is to work with -measure preserving systems that admit ancient Markov chains that are initially supported in a very small region in the “interior” of the state space, so that one can couple such systems to each other “at the boundary” in the fashion needed to establish the analogue of (1) without disrupting the ancient dynamics of such chains. The initial such system (used to establish the base case ) comes from basically considering the action of on a (suitably renormalised) “infinitely large ball” in the Cayley graph, after suitably gluing together the boundary of this ball to complete the action. The ancient Markov chain associated to this system starts at the centre of this infinitely large ball at infinite negative time , and only reaches the boundary of this ball at the time .

The lonely runner conjecture is the following open problem:

Conjecture 1Suppose one has $n+1$ runners on the unit circle $\mathbb{R}/\mathbb{Z}$, all starting at the origin and moving at different speeds. Then for each runner, there is at least one time $t$ for which that runner is “lonely” in the sense that it is separated by a distance at least $\frac{1}{n+1}$ from all other runners.

One can normalise the speed of the lonely runner to be zero, at which point the conjecture can be reformulated (after replacing $n+1$ by $n$) as follows:

Conjecture 2Let $v_1, \dots, v_n$ be non-zero real numbers for some $n \geq 1$. Then there exists a real number $t$ such that the numbers $tv_1, \dots, tv_n$ are all a distance at least $\frac{1}{n+1}$ from the integers, thus $\|tv_i\| \geq \frac{1}{n+1}$ for all $i = 1, \dots, n$, where $\|x\|$ denotes the distance of $x$ to the nearest integer.

This conjecture has been proven for $n \leq 6$, but remains open for larger $n$. The bound $\frac{1}{n+1}$ is optimal, as can be seen by looking at the case $v_i = i$ and applying the Dirichlet approximation theorem. Note that for each non-zero $v$ and any $\varepsilon > 0$, the set $\{ t: \|tv\| \geq \frac{1}{2n} - \varepsilon \}$ has (Banach) density $1 - \frac{1}{n} + 2\varepsilon$, and from this and the union bound we can easily find $t$ for which

$\displaystyle \|tv_i\| \geq \frac{1}{2n} - \varepsilon \hbox{ for all } i,$

but it has proven to be quite challenging to remove the factor of $2$ to increase $\frac{1}{2n}$ to $\frac{1}{n+1}$. (As far as I know, even improving the bound to $\frac{1+c}{2n}$ for some absolute constant $c > 0$ and sufficiently large $n$ remains open.)

The speeds in the above conjecture are arbitrary non-zero reals, but it has been known for some time that one can reduce without loss of generality to the case when the are rationals, or equivalently (by scaling) to the case where they are integers; see e.g. Section 4 of this paper of Bohman, Holzman, and Kleitman.

In this post I would like to remark on a slight refinement of this reduction, in which the speeds are integers of *bounded size*, where the bound depends on . More precisely:

Proposition 3In order to prove the lonely runner conjecture, it suffices to do so under the additional assumption that the are integers of size at most , where is an (explicitly computable) absolute constant. (More precisely: if this restricted version of the lonely runner conjecture is true for all , then the original version of the conjecture is also true for all .)

In principle, this proposition allows one to verify the lonely runner conjecture for a given in finite time; however the number of cases to check with this proposition grows faster than exponentially in , and so this is unfortunately not a feasible approach to verifying the lonely runner conjecture for more values of than currently known.
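In the spirit of this finite verification, one can check the conjecture for specific small integer speed tuples by exact search over a rational grid of times. A sketch (the grid resolution below is a heuristic choice that happens to work for these examples, not a proven reduction):

```python
from fractions import Fraction
from math import floor, lcm

def dist(x):
    """Distance from the rational x to the nearest integer, computed exactly."""
    f = x - floor(x)
    return min(f, 1 - f)

def best_lonely_distance(speeds):
    """Max over a rational grid of times t of min_i ||t * v_i||."""
    D = 2 * (len(speeds) + 1) * lcm(*speeds)
    return max(
        min(dist(Fraction(k * v, D)) for v in speeds)
        for k in range(1, D)
    )

for speeds in [(1, 2, 3), (1, 3, 4), (1, 2, 3, 4)]:
    n = len(speeds)
    # The conjecture predicts a time at distance >= 1/(n+1) from the integers
    # for all speeds; the grid search exhibits such a witness for these tuples.
    assert best_lonely_distance(speeds) >= Fraction(1, n + 1)
```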

One of the key tools needed to prove this proposition is the following additive combinatorics result. Recall that a *generalised arithmetic progression* (or GAP) in the reals is a set of the form

for some and ; the quantity is called the *rank* of the progression. If , the progression is said to be *-proper* if the sums with for are all distinct. We have

Lemma 4 (Progressions lie inside proper progressions)Let be a GAP of rank in the reals, and let . Then is contained in a -proper GAP of rank at most , with

*Proof:* See Theorem 2.1 of this paper of Bilu. (Very similar results can also be found in Theorem 3.40 of my book with Van Vu, or Theorem 1.10 of this paper of mine with Van Vu.)

Now let , and assume inductively that the lonely runner conjecture has been proven for all smaller values of , as well as for the current value of in the case that are integers of size at most for some sufficiently large . We will show that the lonely runner conjecture holds in general for this choice of .

Let be non-zero real numbers. Let be a large absolute constant to be chosen later. From the above lemma applied to the GAP , one can find a -proper GAP of rank at most containing such that

in particular if is large enough depending on .

We write

for some , , and . We thus have for , where is the linear map and are non-zero and lie in the box .

We now need an elementary lemma that allows us to create a “collision” between two of the via a linear projection, without making any of the collide with the origin:

Lemma 5Let be non-zero vectors that are not all collinear with the origin. Then, after replacing one or more of the with their negatives if necessary, there exists a pair such that , and such that none of the is a scalar multiple of .

*Proof:* We may assume that , since the case is vacuous. Applying a generic linear projection to (which does not affect collinearity, or the property that a given is a scalar multiple of ), we may then reduce to the case .

By a rotation and relabeling, we may assume that lies on the negative -axis; by flipping signs as necessary we may then assume that all of the lie in the closed right half-plane. As the are not all collinear with the origin, one of the lies off of the -axis; by relabeling, we may assume that lies off of the axis and makes a minimal angle with the -axis. Then the angle of with the -axis is non-zero but smaller than any non-zero angle that any of the make with this axis, and so none of the are a scalar multiple of , and the claim follows.

We now return to the proof of the proposition. If the are all collinear with the origin, then lie in a one-dimensional arithmetic progression , and then by rescaling we may take the to be integers of magnitude at most , at which point we are done by hypothesis. Thus, we may assume that the are not all collinear with the origin, and so by the above lemma and relabeling we may assume that is non-zero, and that none of the are scalar multiples of .

with for ; by relabeling we may assume without loss of generality that is non-zero, and furthermore that

where is a natural number and have no common factor.

We now define a variant of by the map

where the are real numbers that are linearly independent over , whose precise value will not be of importance in our argument. This is a linear map with the property that , so that consists of at most distinct real numbers, which are non-zero since none of the are scalar multiples of , and the are linearly independent over . As we are assuming inductively that the lonely runner conjecture holds for , we conclude (after deleting duplicates) that there exists at least one real number such that

We would like to “approximate” by to then conclude that there is at least one real number such that

It turns out that we can do this by a Fourier-analytic argument taking advantage of the -proper nature of . Firstly, we see from the Dirichlet approximation theorem that one has

for a set of reals of (Banach) density . Thus, by the triangle inequality, we have

for a set of reals of density .

Applying a smooth Fourier multiplier of Littlewood-Paley type, one can find a trigonometric polynomial

which takes values in , is for , and is no larger than for . We then have

where denotes the mean value of a quasiperiodic function on the reals . We expand the left-hand side out as

From the genericity of , we see that the constraint

occurs if and only if is a scalar multiple of , or equivalently (by (1), (2)) an integer multiple of . Thus

By Fourier expansion and writing , we may write (4) as

The support of the implies that . Because of the -properness of , we see (for large enough) that the equation

and conversely that (7) implies that (6) holds for some with . From (3) we thus have

In particular, there exists a such that

Since is bounded in magnitude by , and is bounded by , we thus have

for each , which by the size properties of implies that for all , giving the lonely runner conjecture for .

The von Neumann ergodic theorem (the Hilbert space version of the mean ergodic theorem) asserts that if is a unitary operator on a Hilbert space , and is a vector in that Hilbert space, then one has

in the strong topology, where is the -invariant subspace of , and is the orthogonal projection to . (See e.g. these previous lecture notes for a proof.) The same proof extends to more general amenable groups: if is a countable amenable group acting on a Hilbert space by unitary transformations for , and is a vector in that Hilbert space, then one has

for any Følner sequence of , where is the -invariant subspace, and is the average of on . Thus one can interpret as a certain average of elements of the orbit of .
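As a toy illustration (mine, not from the post): on the one-dimensional Hilbert space, a unitary operator is just multiplication by a unimodular scalar, and when that scalar is not 1 the invariant subspace is trivial, so the ergodic averages must converge to zero.

```python
import cmath
import math

theta = math.sqrt(2)  # any theta with e^{i*theta} != 1 will do
v = 1 + 0j            # a vector in the Hilbert space C

def ergodic_average(N):
    """(1/N) * sum_{n=0}^{N-1} U^n v, where U v = e^{i*theta} v."""
    return sum(cmath.exp(1j * theta * n) for n in range(N)) * v / N

# The invariant subspace of U is {0}, so the averages tend to the
# projection pi(v) = 0; the geometric series gives the rate O(1/N).
avg = ergodic_average(10 ** 5)
```

Summing the geometric series directly bounds the average by 2/(N |1 - e^{i*theta}|), in agreement with the mean ergodic theorem.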

In a previous blog post, I noted a variant of this ergodic theorem (due to Alaoglu and Birkhoff) that holds even when the group is not amenable (or not discrete), using a more abstract notion of averaging:

Theorem 1 (Abstract ergodic theorem) Let be an arbitrary group acting unitarily on a Hilbert space , and let be a vector in . Then is the element in the closed convex hull of of minimal norm, and is also the unique element of in this closed convex hull.

I recently stumbled upon a different way to think about this theorem, in the additive case when is abelian, which has a closer resemblance to the classical mean ergodic theorem. Given an arbitrary additive group (not necessarily discrete, or countable), let denote the collection of finite non-empty multisets in – that is to say, unordered collections of elements of , not necessarily distinct, for some positive integer . Given two multisets , in , we can form the sum set . Note that the sum set can contain multiplicity even when do not; for instance, . Given a multiset in , and a function from to a vector space , we define the average as

Note that the multiplicity function of the set affects the average; for instance, we have , but .
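Here is a minimal sketch (the helper names are my own) of the multiset conventions just described, showing that multiplicity affects both averages and sumsets:

```python
from fractions import Fraction

def multiset_average(A):
    """Average of the identity function over the multiset A, counted
    with multiplicity (A is represented as a plain list)."""
    return Fraction(sum(A), len(A))

def sumset(A, B):
    """The multiset sum A + B: every pairwise sum a + b, with multiplicity."""
    return [a + b for a in A for b in B]

avg1 = multiset_average([0, 1])      # 1/2
avg2 = multiset_average([0, 1, 1])   # 2/3: the repeated element shifts it
S = sorted(sumset([0, 1], [0, 1]))   # {0,1} + {0,1} = {0,1,1,2}
```

The last line illustrates how the sumset acquires multiplicity even when the summands have none.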

We can define a directed set on as follows: given two multisets , we write if we have for some . Thus for instance we have . It is easy to verify that this relation is transitive and reflexive, and that it is directed because any two elements of have a common upper bound, namely . (This is where we need to be abelian.) The notion of convergence along a net now allows us to define the notion of convergence along : given a family of points in a topological space indexed by elements of , and a point in , we say that *converges* to along if, for every open neighbourhood of in , one has for sufficiently large , that is to say there exists such that for all . If the topological space is Hausdorff, then the limit is unique (if it exists), and we then write

When takes values in the reals, one can also define the limit superior or limit inferior along such nets in the obvious fashion.

We can then give an alternate formulation of the abstract ergodic theorem in the abelian case:

Theorem 2 (Abelian abstract ergodic theorem) Let be an arbitrary additive group acting unitarily on a Hilbert space , and let be a vector in . Then we have in the strong topology of .

*Proof:* Suppose that , so that for some . Then

so by unitarity and the triangle inequality we have

thus is monotone non-increasing in . Since this quantity is bounded between and , we conclude that the limit exists. Thus, for any , we have for sufficiently large that

for all . In particular, for any , we have

We can write

and so from the parallelogram law and unitarity we have

for all , and hence by the triangle inequality (averaging over a finite multiset )

for any . This shows that is a Cauchy sequence in (in the strong topology), and hence (by the completeness of ) tends to a limit. Shifting by a group element , we have

and hence is invariant under shifts, and thus lies in . On the other hand, for any and , we have

and thus on taking strong limits

and so is orthogonal to . Combining these two facts we see that is equal to as claimed.

To relate this result to the classical ergodic theorem, we observe

Lemma 3 Let be a countable additive group, with a Følner sequence , and let be a bounded sequence in a normed vector space indexed by . If exists, then exists, and the two limits are equal.

*Proof:* From the Følner property, we see that for any and any , the averages and differ by at most in norm if is sufficiently large depending on , (and the ). On the other hand, by the existence of the limit , the averages and differ by at most in norm if is sufficiently large depending on (regardless of how large is). The claim follows.

It turns out that this approach can also be used as an alternate way to construct the Gowers-Host-Kra seminorms in ergodic theory, which has the feature that it does not explicitly require any amenability on the group (or separability on the underlying measure space), though, as pointed out to me in comments, even uncountable abelian groups are amenable in the sense of possessing an invariant mean, even if they do not have a Følner sequence.

Given an arbitrary additive group , define a *-system* to be a probability space (not necessarily separable or standard Borel), together with a collection of invertible, measure-preserving maps, such that is the identity and (modulo null sets) for all . This then gives isomorphisms for by setting . From the above abstract ergodic theorem, we see that

in the strong topology of for any , where is the collection of measurable sets that are essentially -invariant in the sense that modulo null sets for all , and is the conditional expectation of with respect to .

In a similar spirit, we have

Theorem 4 (Convergence of Gowers-Host-Kra seminorms) Let be a -system for some additive group . Let be a natural number, and for every , let , which for simplicity we take to be real-valued. Then the expression converges, where we write , and we are using the product direct set on to define the convergence . In particular, for , the limit

converges.
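In the finite setting the Gowers norms have a concrete combinatorial description, and the k = 2 case can be checked numerically. The following sketch (my own, on the cyclic group Z/NZ rather than a measure-preserving system) verifies the standard identity expressing the fourth power of the U^2 norm as the fourth moment of the Fourier coefficients:

```python
import cmath

def u2_fourth_power(f):
    """|f|_{U^2}^4 on Z/NZ: the average over the cube (x, h, k) of
    f(x) * conj(f(x+h)) * conj(f(x+k)) * f(x+h+k)."""
    N = len(f)
    total = 0j
    for x in range(N):
        for h in range(N):
            for k in range(N):
                total += (f[x]
                          * f[(x + h) % N].conjugate()
                          * f[(x + k) % N].conjugate()
                          * f[(x + h + k) % N])
    return total / N ** 3

def fourier_fourth_moment(f):
    """sum over xi of |fhat(xi)|^4, with fhat(xi) = E_x f(x) e(-x*xi/N)."""
    N = len(f)
    total = 0.0
    for xi in range(N):
        fhat = sum(f[x] * cmath.exp(-2j * cmath.pi * x * xi / N)
                   for x in range(N)) / N
        total += abs(fhat) ** 4
    return total

N = 12
f = [cmath.exp(2j * cmath.pi * (5 * x * x) / N) + 0.3 * x / N for x in range(N)]
a, b = u2_fourth_power(f), fourier_fourth_moment(f)
```

The two quantities agree exactly (up to rounding), which is the finite-group shadow of the ergodic-theoretic convergence above.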

We prove this theorem below the fold. It implies a number of other known descriptions of the Gowers-Host-Kra seminorms , for instance that

for , while from the ergodic theorem we have

This definition also manifestly demonstrates the cube symmetries of the Host-Kra measures on , defined via duality by requiring that

In a subsequent blog post I hope to present a more detailed study of the norm and its relationship with eigenfunctions and the Kronecker factor, without assuming any amenability on or any separability or topological structure on .

We return to the study of the Riemann zeta function , focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for , which in turn lead to improved estimates for the error term in the prime number theorem.

In equation (21) of Notes 2 we obtained the somewhat crude estimates

for any and with and . Setting , we obtained the crude estimate

in this region. In particular, if and then we had . Using the functional equation and the Hadamard three lines lemma, we can improve this to ; see Supplement 3.

Now we seek better upper bounds on . We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 33 of Supplement 3:

Proposition 1 Let with and . Then where .

*Proof:* We fix a smooth function with for and for , and allow implied constants to depend on . Let with . From Exercise 33 of Supplement 3, we have

for some sufficiently large absolute constant . By dyadic decomposition, we thus have

We can absorb the first term in the second using the case of the supremum. Writing , where

it thus suffices to show that

for each . But from the fundamental theorem of calculus, the left-hand side can be written as

and the claim then follows from the triangle inequality and a routine calculation.

We are thus interested in getting good bounds on the sum . More generally, we consider normalised exponential sums of the form

where is an interval of length at most for some , and is a smooth function. We will assume smoothness estimates of the form

for some , all , and all , where is the -fold derivative of ; in the case , of interest for the Riemann zeta function, we easily verify that these estimates hold with . (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative of directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to .
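As a quick numerical illustration (not from the notes) of the cancellation one hopes to exploit: in the zeta-function case the relevant sum is essentially a sum of n^{-it} over a dyadic block, and already for moderate parameters it is far smaller than the number of terms.

```python
import cmath
import math

def exp_sum(N, t):
    """|sum_{N <= n <= 2N} exp(-i * t * log n)| = |sum_{N <= n <= 2N} n^{-it}|."""
    return abs(sum(cmath.exp(-1j * t * math.log(n))
                   for n in range(N, 2 * N + 1)))

N, t = 1000, 10000.0
S = exp_sum(N, t)
trivial = N + 1  # the triangle-inequality bound: one for each term
```

Heuristically one expects square-root cancellation here, i.e. S of size about N^{1/2}; the assertion below only checks the much weaker statement that a large constant factor is saved over the trivial bound.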

The trivial bound for (2) is

and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of for (2) for any if ; this assertion (a special case of the *exponent pair hypothesis*) would have many consequences (for instance, inserting it into Proposition 1 soon yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form when and depends on . We present two such results here, which perform well for small and large values of respectively:

Theorem 2 Let , let be an interval of length at most , and let be a smooth function obeying (3) for all and .

The factor of can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of . The estimate (6) is superior to (5) when for large, since (after optimising in ) (5) gives a gain of the form over the trivial bound, while (6) gives . We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on , and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.

We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating like a polynomial of degree roughly ; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a *bilinear* function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about times, which is why the gain is only exponentially small in . The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of – by using Hölder’s inequality in combination with the fact that the discrete curve becomes roughly equidistributed in the box after taking the sumset of about copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on a “-adic” version of this equidistribution to reduce the claim at a given scale to a smaller scale with , and then proceeding by induction.
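The squaring step of the van der Corput A-process can be seen concretely: for any finite exponential sum one has the exact identity expressing the squared modulus as a sum over shifts of the differenced phase, and for a quadratic phase the differenced phase is linear in the summation variable. A sketch of mine verifying the identity numerically:

```python
import cmath

def e(x):
    """The standard character e(x) = exp(2*pi*i*x)."""
    return cmath.exp(2j * cmath.pi * x)

alpha, N = 0.123456789, 200
f = lambda n: alpha * n * n  # quadratic phase: f(n+h) - f(n) is linear in n

S = sum(e(f(n)) for n in range(N))

# Expanding |S|^2 = sum_{m,n} e(f(m) - f(n)) and substituting m = n + h:
differenced = sum(e(f(n + h) - f(n))
                  for n in range(N)
                  for h in range(-n, N - n))
```

The right-hand side is a sum of linear exponential sums in n (one for each shift h), each of which can be estimated by the geometric series formula; this is exactly the linearisation described above.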

One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:

Exercise 3 (Subconvexity bound)

- (i) Show that for all . (Hint: use the case of the van der Corput estimate.)
- (ii) For any , show that as .

Exercise 4 Let be such that , and let .

- (i) (Littlewood bound) Use the van der Corput estimate to show that whenever .
- (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that whenever .

As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region , which in turn leads to the prime number theorem with error term

for . If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region

(which is only slightly wider than the classical zero-free region) and an error term

in the prime number theorem.

Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let be a non-principal character of modulus .

- (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that whenever and . (Hint: use the Vinogradov estimate and a change of variables to control for various intervals of length at most and residue classes , in the regime (say). For , do not try to capture any cancellation and just use the triangle inequality instead.)
- (ii) Obtain a zero-free region for , for some (effective) absolute constant .
- (iii) Obtain the prime number theorem in arithmetic progressions with error term whenever , , is primitive, and depends (ineffectively) on .

In Notes 2, the Riemann zeta function (and more generally, the Dirichlet -functions ) were extended meromorphically into the region in and to the right of the critical strip. This is a sufficient amount of meromorphic continuation for many applications in analytic number theory, such as establishing the prime number theorem and its variants. The zeroes of the zeta function in the critical strip are known as the *non-trivial zeroes* of , and thanks to the truncated explicit formulae developed in Notes 2, they control the asymptotic distribution of the primes (up to small errors).

The function obeys the trivial functional equation

for all in its domain of definition. Indeed, as is real-valued when is real, the function vanishes on the real line and is also meromorphic, and hence vanishes everywhere. Similarly one has the functional equation

From these equations we see that the zeroes of the zeta function are symmetric across the real axis, and the zeroes of are the reflection of the zeroes of across this axis.

It is a remarkable fact that these functions obey an additional, and more non-trivial, functional equation, this time establishing a symmetry across the *critical line* rather than the real axis. One consequence of this symmetry is that the zeta function and -functions may be extended meromorphically to the entire complex plane. For the zeta function, the functional equation was discovered by Riemann, and reads as follows:

Theorem 1 (Functional equation for the Riemann zeta function) The Riemann zeta function extends meromorphically to the entire complex plane, with a simple pole at and no other poles. Furthermore, one has the functional equation for all complex other than , where is the function

Here , are the complex-analytic extensions of the classical trigonometric functions , and is the Gamma function, whose definition and properties we review below the fold.

The functional equation can be placed in a more symmetric form as follows:

Corollary 2 (Functional equation for the Riemann xi function) The Riemann xi function is analytic on the entire complex plane (after removing all removable singularities), and obeys the functional equations

In particular, the zeroes of consist precisely of the non-trivial zeroes of , and are symmetric about both the real axis and the critical line. Also, is real-valued on the critical line and on the real axis.

Corollary 2 is an easy consequence of Theorem 1 together with the duplication theorem for the Gamma function, and the fact that has no zeroes to the right of the critical strip, and is left as an exercise to the reader (Exercise 19). The functional equation in Theorem 1 has many proofs, but most of them are related in one way or another to the Poisson summation formula

(Theorem 34 from Supplement 2, at least in the case when is twice continuously differentiable and compactly supported), which can be viewed as a Fourier-analytic link between the coarse-scale distribution of the integers and the fine-scale distribution of the integers. Indeed, there is a quick heuristic proof of the functional equation that comes from formally applying the Poisson summation formula to the function , and noting that the functions and are formally Fourier transforms of each other, up to some Gamma function factors, as well as some trigonometric factors arising from the distinction between the real line and the half-line. Such a heuristic proof can indeed be made rigorous, and we do so below the fold, while also providing Riemann’s two classical proofs of the functional equation.

From the functional equation (and the poles of the Gamma function), one can see that has *trivial zeroes* at the negative even integers , in addition to the non-trivial zeroes in the critical strip. More generally, the following table summarises the zeroes and poles of the various special functions appearing in the functional equation, after they have been meromorphically extended to the entire complex plane, and with zeroes classified as “non-trivial” or “trivial” depending on whether they lie in the critical strip or not. (Exponential functions such as or have no zeroes or poles, and will be ignored in this table; the zeroes and poles of rational functions such as are self-evident and will also not be displayed here.)

| Function | Non-trivial zeroes | Trivial zeroes | Poles |
| ζ(s) | Yes | Negative even integers | s = 1 |
| ζ(1−s) | Yes | Odd integers ≥ 3 | s = 0 |
| sin(πs/2) | No | Even integers | No |
| cos(πs/2) | No | Odd integers | No |
| sin(πs) | No | Integers | No |
| Γ(s) | No | No | Integers ≤ 0 |
| Γ(s/2) | No | No | Even integers ≤ 0 |
| Γ(1−s) | No | No | Integers ≥ 1 |
| Γ((1−s)/2) | No | No | Odd integers ≥ 1 |
| ξ(s) | Yes | No | No |

Among other things, this table indicates that the Gamma and trigonometric factors in the functional equation are tied to the trivial zeroes and poles of zeta, but have no direct bearing on the distribution of the non-trivial zeroes, which is the most important feature of the zeta function for the purposes of analytic number theory, beyond the fact that they are symmetric about the real axis and critical line. In particular, the Riemann hypothesis is not going to be resolved just from further analysis of the Gamma function!

The zeta function computes the “global” sum , with ranging all the way from to infinity. However, by some Fourier-analytic (or complex-analytic) manipulation, it is possible to use the zeta function to also control more “localised” sums, such as for some and some smooth compactly supported function . It turns out that the functional equation (3) for the zeta function localises to this context, giving an *approximate functional equation* which roughly speaking takes the form

whenever and ; see Theorem 38 below for a precise formulation of this equation. Unsurprisingly, this form of the functional equation is also very closely related to the Poisson summation formula (8), indeed it is essentially a special case of that formula (or more precisely, of the van der Corput -process). This useful identity relates long smoothed sums of to short smoothed sums of (or vice versa), and can thus be used to shorten exponential sums involving terms such as , which is useful when obtaining some of the more advanced estimates on the Riemann zeta function.

We will give two other basic uses of the functional equation. The first is to get a good count (as opposed to merely an upper bound) on the density of zeroes in the critical strip, establishing the Riemann-von Mangoldt formula that the number of zeroes of imaginary part between and is for large . The other is to obtain untruncated versions of the explicit formula from Notes 2, giving a remarkable exact formula for sums involving the von Mangoldt function in terms of zeroes of the Riemann zeta function. These results are not strictly necessary for most of the material in the rest of the course, but certainly help to clarify the nature of the Riemann zeta function and its relation to the primes.

In view of the material in previous notes, it should not be surprising that there are analogues of all of the above theory for Dirichlet -functions . We will restrict attention to primitive characters , since the -function for imprimitive characters merely differs from the -function of the associated primitive factor by a finite Euler product; indeed, if for some principal whose modulus is coprime to that of , then

(cf. equation (45) of Notes 2).

The main new feature is that the Poisson summation formula needs to be “twisted” by a Dirichlet character , and this boils down to the problem of understanding the finite (additive) Fourier transform of a Dirichlet character. This is achieved by the classical theory of Gauss sums, which we review below the fold. There is one new wrinkle: the value of plays a role in the functional equation. More precisely, we have

Theorem 3 (Functional equation for -functions) Let be a primitive character of modulus with . Then extends to an entire function on the complex plane, with or equivalently

for all , where is equal to in the even case and in the odd case , and

where is the Gauss sum

and , with the convention that the -periodic function is also (by abuse of notation) applied to in the cyclic group .
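As a sanity check (code mine, not from the notes), the Gauss sum of the quadratic character mod a prime p has magnitude √p, and for p ≡ 3 (mod 4) it equals i√p exactly, by a classical evaluation of Gauss:

```python
import cmath
import math

p = 7  # a prime with p = 3 mod 4

def chi(n):
    """The quadratic (Legendre) character mod p, via Euler's criterion:
    chi(n) = n^((p-1)/2) mod p, interpreted as +1 or -1 (and 0 at 0)."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

# The Gauss sum tau(chi) = sum_n chi(n) e(n/p).
tau = sum(chi(n) * cmath.exp(2j * cmath.pi * n / p) for n in range(p))
```

The magnitude √q for primitive characters is exactly what makes the finite Fourier transform of a character as “flat” as possible, which is the mechanism behind the functional equation above.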

From this functional equation and (2) we see that, as with the Riemann zeta function, the non-trivial zeroes of (defined as the zeroes within the critical strip) are symmetric around the critical line (and, if is real, are also symmetric around the real axis). In addition, acquires trivial zeroes at the negative even integers and at zero if , and at the negative odd integers if . For imprimitive , we see from (9) that also acquires some additional trivial zeroes on the left edge of the critical strip.

There is also a symmetric version of this equation, analogous to Corollary 2:

Corollary 4 Let be as above, and set ; then is entire with .

For further detail on the functional equation and its implications, I recommend the classic text of Titchmarsh or the text of Davenport.

We will shortly turn to the complex-analytic approach to multiplicative number theory, which relies on the basic properties of complex analytic functions. In this supplement to the main notes, we quickly review the portions of complex analysis that we will be using in this course. We will not attempt a comprehensive review of this subject; for instance, we will completely neglect the conformal geometry or Riemann surface aspect of complex analysis, and we will also avoid using the various boundary convergence theorems for Taylor series or Dirichlet series (the latter type of result is traditionally utilised in multiplicative number theory, but I personally find them a little unintuitive to use, and will instead rely on a slightly different set of complex-analytic tools). We will also focus on the “local” structure of complex analytic functions, in particular adopting the philosophy that such functions behave locally like complex polynomials; the classical “global” theory of entire functions, while traditionally used in the theory of the Riemann zeta function, will be downplayed in these notes. On the other hand, we will play up the relationship between complex analysis and Fourier analysis, as we will often prefer the latter tool to the former in some of the subsequent material. (In the traditional approach to the subject, the Mellin transform is used in place of the Fourier transform, but we will not emphasise the role of the Mellin transform here.)

We begin by recalling the notion of a holomorphic function, which will later be shown to be essentially synonymous with that of a complex analytic function.

Definition 1 (Holomorphic function) Let be an open subset of , and let be a function. If , we say that is complex differentiable at if the limit exists, in which case we refer to as the (complex) derivative of at . If is differentiable at every point of , and the derivative is continuous, we say that is holomorphic on .

Exercise 2 Show that a function is holomorphic if and only if the two-variable function is continuously differentiable on and obeys the Cauchy-Riemann equation

Basic examples of holomorphic functions include complex polynomials

as well as the complex exponential function

which are holomorphic on the entire complex plane (i.e., they are entire functions). The sum or product of two holomorphic functions is again holomorphic; the quotient of two holomorphic functions is holomorphic so long as the denominator is non-zero. Finally, the composition of two holomorphic functions is holomorphic wherever the composition is defined.
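One can check the Cauchy-Riemann equation numerically for such examples. In complex form it reads ∂f/∂x = −i ∂f/∂y, and central differences confirm it for, say, f(z) = z² + eᶻ (a sketch of mine, not part of the notes):

```python
import cmath

def f(z):
    return z * z + cmath.exp(z)  # entire: a polynomial plus the exponential

z0, h = 0.3 + 0.7j, 1e-6
# Partial derivatives of f viewed as a function of (x, y), with z = x + iy.
df_dx = (f(z0 + h) - f(z0 - h)) / (2 * h)
df_dy = (f(z0 + 1j * h) - f(z0 - 1j * h)) / (2 * h)

cr_defect = df_dx - (-1j) * df_dy  # Cauchy-Riemann: this should vanish
fprime = 2 * z0 + cmath.exp(z0)    # the true complex derivative
```

For a holomorphic function both partial derivatives determine the single complex derivative: df/dx equals f′ and df/dy equals i·f′, which is exactly what the defect above measures.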

- (i) Establish Euler’s formula for all . (Hint: it is a bit tricky to do this starting from the trigonometric definitions of sine and cosine; I recommend either using the Taylor series formulations of these functions instead, or alternatively relying on the ordinary differential equations obeyed by sine and cosine.)
- (ii) Show that every non-zero complex number has a complex logarithm such that , and that this logarithm is unique up to integer multiples of .
- (iii) Show that there exists a unique principal branch of the complex logarithm in the region , defined by requiring to be a logarithm of with imaginary part between and . Show that this principal branch is holomorphic with derivative .

In real analysis, we have the fundamental theorem of calculus, which asserts that

whenever is a real interval and is a continuously differentiable function. The complex analogue of this fact is that

whenever is a holomorphic function, and is a contour in , by which we mean a piecewise continuously differentiable function, and the contour integral for a continuous function is defined via change of variables as

The complex fundamental theorem of calculus (2) follows easily from the real fundamental theorem and the chain rule.
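A numerical sketch (mine) of the complex fundamental theorem of calculus: integrating F′(z) = 3z² along a quarter-circle from 1 to i recovers F(i) − F(1) = −1 − i, independently of the path chosen.

```python
import cmath

def contour_integral(f, gamma, dgamma, a, b, steps=4000):
    """Midpoint-rule approximation to the contour integral of f along the
    parametrised path gamma: [a, b] -> C with derivative dgamma."""
    h = (b - a) / steps
    return sum(f(gamma(a + (k + 0.5) * h)) * dgamma(a + (k + 0.5) * h)
               for k in range(steps)) * h

gamma = lambda t: cmath.exp(1j * t)           # quarter-circle from 1 to i
dgamma = lambda t: 1j * cmath.exp(1j * t)
val = contour_integral(lambda z: 3 * z * z, gamma, dgamma, 0.0, cmath.pi / 2)
# With F(z) = z^3, the answer should be F(i) - F(1) = -i - 1.
```

Replacing the quarter-circle by any other contour from 1 to i gives the same value, which is the path-independence promised by holomorphy.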

In real analysis, we have the rather trivial fact that the integral of a continuous function on a closed contour is always zero:

In complex analysis, the analogous fact is significantly more powerful, and is known as Cauchy’s theorem:

Theorem 4 (Cauchy’s theorem) Let be a holomorphic function in a simply connected open set , and let be a closed contour in (thus ). Then .

Exercise 5Use Stokes’ theorem to give a proof of Cauchy’s theorem.

A useful reformulation of Cauchy’s theorem is that of contour shifting: if is a holomorphic function on an open set , and are two contours in an open set with and , such that can be continuously deformed into , then . A basic application of contour shifting is the Cauchy integral formula:

Theorem 6 (Cauchy integral formula) Let be a holomorphic function in a simply connected open set , and let be a closed contour which is simple (thus does not traverse any point more than once, with the exception of the endpoint that is traversed twice), and which encloses a bounded region in the anticlockwise direction. Then for any , one has

*Proof:* Let be a sufficiently small quantity. By contour shifting, one can replace the contour by the sum (concatenation) of three contours: a contour from to , a contour traversing the circle once anticlockwise, and the reversal of the contour that goes from to . The contributions of the first and third contours cancel each other, thus

By a change of variables, the right-hand side can be expanded as

Sending , we obtain the claim.

The Cauchy integral formula has many consequences. Specialising to the case when traverses a circle around , we conclude the mean value property

whenever is holomorphic in a neighbourhood of the disk . In a similar spirit, we have the maximum principle for holomorphic functions:

Lemma 7 (Maximum principle) Let be a simply connected open set, and let be a simple closed contour in enclosing a bounded region anti-clockwise. Let be a holomorphic function. If we have the bound for all on the contour , then we also have the bound for all .

*Proof:* We use an argument of Landau. Fix . From the Cauchy integral formula and the triangle inequality we have the bound

for some constant depending on and . This ostensibly looks like a weaker bound than what we want, but we can miraculously make the constant disappear by the “tensor power trick”. Namely, observe that if is a holomorphic function bounded in magnitude by on , and is a natural number, then is a holomorphic function bounded in magnitude by on . Applying the preceding argument with replaced by we conclude that

and hence

Sending , we obtain the claim.
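A quick numerical check (mine) of the mean value property mentioned above: averaging a holomorphic function over a circle recovers its value at the centre, essentially to machine precision once the circle is sampled uniformly in angle.

```python
import cmath

def circle_average(f, z0, r, steps=2048):
    """Average of f over the circle |z - z0| = r, uniform in the angle."""
    return sum(f(z0 + r * cmath.exp(2j * cmath.pi * k / steps))
               for k in range(steps)) / steps

f = lambda z: z ** 3 - 2 * z + cmath.exp(z)  # holomorphic on all of C
z0 = 0.5 + 0.25j
avg = circle_average(f, z0, 1.0)
```

The uniform sampling rule is spectrally accurate here because the integrand is a smooth periodic function of the angle, so the numerical average matches f(z0) far beyond the accuracy one would expect from a generic quadrature.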

Another basic application of the integral formula is

Corollary 8 Every holomorphic function is complex analytic; thus it has a convergent Taylor series around every point in the domain. In particular, holomorphic functions are smooth, and the derivative of a holomorphic function is again holomorphic.

Conversely, it is easy to see that complex analytic functions are holomorphic. Thus, the terms “complex analytic” and “holomorphic” are synonymous, at least when working on open domains. (On a non-open set , saying that is analytic on is equivalent to asserting that extends to a holomorphic function of an open neighbourhood of .) This is in marked contrast to real analysis, in which a function can be continuously differentiable, or even smooth, without being real analytic.

*Proof:* By translation, we may suppose that . Let be a contour traversing the circle that is contained in the domain ; then by the Cauchy integral formula one has

for all in the disk . As is continuously differentiable (and hence continuous) on , it is bounded. From the geometric series formula

and dominated convergence, we conclude that

with the right-hand side an absolutely convergent series for , and the claim follows.

Exercise 9 Establish the generalised Cauchy integral formulae for any non-negative integer , where is the -fold complex derivative of .
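The generalised Cauchy integral formula gives a practical way to extract Taylor coefficients. The following sketch (mine) recovers the coefficients 1/n! of the exponential function from a contour integral over the unit circle:

```python
import cmath
import math

def taylor_coefficient(f, n, r=1.0, steps=4096):
    """c_n = (1/(2*pi*i)) * integral over |z| = r of f(z) / z^(n+1) dz,
    approximated by uniform sampling of the circle (dz = i*z dtheta)."""
    total = 0j
    for k in range(steps):
        z = r * cmath.exp(2j * cmath.pi * k / steps)
        total += f(z) / z ** (n + 1) * (1j * z)
    return total * (2 * cmath.pi / steps) / (2j * cmath.pi)

# For f = exp, the Taylor coefficients at 0 are 1/n!.
coeffs = [taylor_coefficient(cmath.exp, n) for n in range(6)]
```

The only error made by the discretisation is aliasing against the coefficient c_{n + steps}, which for the exponential function is utterly negligible; this is the same mechanism as the spectral accuracy of circle averages.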

This in turn leads to a converse to Cauchy’s theorem, known as Morera’s theorem:

Corollary 10 (Morera’s theorem) Let be a continuous function on an open set with the property that for all closed contours . Then is holomorphic.

*Proof:* We can of course assume to be non-empty and connected (hence path-connected). Fix a point , and define a “primitive” of by defining , with being any contour from to (this is well defined by hypothesis). By mimicking the proof of the real fundamental theorem of calculus, we see that is holomorphic with , and the claim now follows from Corollary 8.

An important consequence of Morera’s theorem for us is

Corollary 11 (Locally uniform limit of holomorphic functions is holomorphic) Let be holomorphic functions on an open set which converge locally uniformly to a function . Then is also holomorphic on .

*Proof:* By working locally we may assume that is a ball, and in particular simply connected. By Cauchy’s theorem, for all closed contours in . By local uniform convergence, this implies that for all such contours, and the claim then follows from Morera’s theorem.

Now we study the zeroes of complex analytic functions. If a complex analytic function vanishes at a point , but is not identically zero in a neighbourhood of that point, then by Taylor expansion we see that factors in a sufficiently small neighbourhood of as

for some natural number (which we call the *order* or *multiplicity* of the zero at ) and some function that is complex analytic and non-zero near ; this generalises the factor theorem for polynomials. In particular, the zero is isolated if does not vanish identically near . We conclude that if is connected and vanishes on a neighbourhood of some point in , then it must vanish on all of (since the maximal connected neighbourhood of in on which vanishes cannot have any boundary point in ). This implies unique continuation of analytic functions: if two complex analytic functions on agree on a non-empty open set, then they agree everywhere. In particular, if a complex analytic function does not vanish everywhere, then all of its zeroes are isolated, so in particular it has only finitely many zeroes on any given compact set.

Recall that a rational function is a function which is a quotient of two polynomials (at least outside of the set where vanishes). Analogously, let us define a meromorphic function on an open set to be a function defined outside of a discrete subset of (the *singularities* of ), which is locally the quotient of holomorphic functions, in the sense that for every , one has in a neighbourhood of excluding , with holomorphic near and with non-vanishing outside of . If and has a zero of equal or higher order than at , then the singularity is removable and one can extend the meromorphic function holomorphically across (by the holomorphic factor theorem (4)); otherwise, the singularity is non-removable and is known as a *pole*, whose order is equal to the difference between the order of and the order of at . (If one wished, one could extend meromorphic functions to the poles by embedding in the Riemann sphere and mapping each pole to , but we will not do so here. One could also consider non-meromorphic functions with essential singularities at various points, but we will have no need to analyse such singularities in this course.) If the order of a pole or zero is one, we say that it is *simple*; if it is two, we say it is *double*; and so forth.

Exercise 12 Show that the space of meromorphic functions on a non-empty open set , quotiented by almost everywhere equivalence, forms a field.

By quotienting two Taylor series, we see that if a meromorphic function has a pole of order at some point , then it has a Laurent expansion

absolutely convergent in a neighbourhood of excluding itself, and with non-zero. The Laurent coefficient has a special significance, and is called the residue of the meromorphic function at , which we will denote as . The importance of this coefficient comes from the following significant generalisation of the Cauchy integral formula, known as the residue theorem:

Exercise 13 (Residue theorem) Let be a meromorphic function on a simply connected domain , and let be a closed contour in enclosing a bounded region anticlockwise, and avoiding all the singularities of . Show that where is summed over all the poles of that lie in .
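As a quick numerical sanity check (the test function, contour discretisation, and tolerance below are my own illustrative choices, not from the text), one can verify the residue theorem by integrating a function with two known simple poles around the unit circle; a uniform Riemann sum is spectrally accurate here because the integrand is periodic and analytic in a strip.

```python
import cmath

# Hypothetical test function: simple poles at 0.5 and -0.3 (both inside
# the unit circle) with residues 1 and 2 respectively.
def f(z):
    return 1.0 / (z - 0.5) + 2.0 / (z + 0.3)

# Integrate f(z) dz over the unit circle, traversed anticlockwise, using
# a uniform Riemann sum in the angle theta.
N = 4096
integral = 0j
for k in range(N):
    theta = 2 * cmath.pi * k / N
    z = cmath.exp(1j * theta)
    integral += f(z) * 1j * z * (2 * cmath.pi / N)

expected = 2j * cmath.pi * (1 + 2)  # 2*pi*i times the sum of the residues
print(abs(integral - expected) < 1e-8)  # True
```

The same check works for any rational function with poles off the contour; moving a pole outside the unit circle removes its residue from the expected value.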

The residue theorem is particularly useful when applied to logarithmic derivatives of meromorphic functions , because the residue is of a specific form:

Exercise 14 Let be a meromorphic function on an open set that does not vanish identically. Show that the only poles of are simple poles (poles of order ), occurring at the poles and zeroes of (after all removable singularities have been removed). Furthermore, the residue of at a pole is an integer, equal to the order of the zero of if has a zero at , or equal to the negative of the order of the pole of at if has a pole at .

Remark 15 The fact that residues of logarithmic derivatives of meromorphic functions are automatically integers is a remarkable feature of the complex analytic approach to multiplicative number theory, which is difficult (though not entirely impossible) to duplicate in other approaches to the subject. Here is a sample application of this integrality, which is challenging to reproduce by non-complex-analytic means: if is meromorphic near , and one has the bound as , then must in fact stay bounded near , because the only integer of magnitude less than is zero.
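The integrality in Exercise 14 can also be checked numerically (the test function below is my own): for f(z) = z²/(z − 0.5), which has a double zero at 0 and a simple pole at 0.5 inside the unit circle, the contour integral of f′/f divided by 2πi should return the integer 2 − 1 = 1.

```python
import cmath

# f has a double zero at 0 and a simple pole at 0.5, both inside |z| = 1.
def f(z):
    return z**2 / (z - 0.5)

def fprime(z):
    # derivative of z^2 / (z - 0.5) by the quotient rule
    return (2 * z * (z - 0.5) - z**2) / (z - 0.5) ** 2

# (1 / (2*pi*i)) * contour integral of f'/f counts zeros minus poles
# (with multiplicity), by the residue computation of Exercise 14.
N = 4096
total = 0j
for k in range(N):
    theta = 2 * cmath.pi * k / N
    z = cmath.exp(1j * theta)
    total += (fprime(z) / f(z)) * 1j * z * (2 * cmath.pi / N)
count = total / (2j * cmath.pi)

print(round(count.real))  # 1
```

Note that f′/f simplifies to 2/z − 1/(z − 0.5), so the two residues 2 and −1 are visible directly, matching the orders of the zero and pole.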

In the traditional foundations of probability theory, one selects a probability space , and makes a distinction between *deterministic* mathematical objects, which do not depend on the sampled state , and *stochastic* (or *random*) mathematical objects, which do depend (but in a measurable fashion) on the sampled state . For instance, a *deterministic real number* would just be an element , whereas a *stochastic real number* (or *real random variable*) would be a measurable function , where in this post will always be endowed with the Borel -algebra. (For readers familiar with nonstandard analysis, the adjectives “deterministic” and “stochastic” will be used here in a manner analogous to the uses of the adjectives “standard” and “nonstandard” in nonstandard analysis. The analogy is particularly close when comparing with the “cheap nonstandard analysis” discussed in this previous blog post. We will also use “relative to ” as a synonym for “stochastic”.)

Actually, for our purposes we will adopt the philosophy of identifying stochastic objects that agree almost surely, so if one was to be completely precise, we should define a stochastic real number to be an *equivalence class* of measurable functions , up to almost sure equivalence. However, we shall often abuse notation and write simply as .

More generally, given any measurable space , we can talk either about deterministic elements , or about stochastic elements of , that is to say equivalence classes of measurable maps up to almost sure equivalence. We will use to denote the set of all stochastic elements of . (For readers familiar with sheaves, it may be helpful for the purposes of this post to think of as the space of measurable global sections of the trivial –bundle over .) Of course every deterministic element of can also be viewed as a stochastic element given by (the equivalence class of) the constant function , thus giving an embedding of into . We do not attempt here to give an interpretation of for sets that are not equipped with a -algebra .

Remark 1 In my previous post on the foundations of probability theory, I emphasised the freedom to extend the sample space to a larger sample space whenever one wished to inject additional sources of randomness. This is of course an important freedom to possess (and in the current formalism, is the analogue of the important operation of base change in algebraic geometry), but in this post we will focus on a single fixed sample space , and not consider extensions of this space, so that one only has to consider two types of mathematical objects (deterministic and stochastic), as opposed to having many more such types, one for each potential choice of sample space (with the deterministic objects corresponding to the case when the sample space collapses to a point).

Any (measurable) -ary operation on deterministic mathematical objects then extends to their stochastic counterparts by applying the operation pointwise. For instance, the addition operation on deterministic real numbers extends to an addition operation , by defining the class for to be the equivalence class of the function ; this operation is easily seen to be well-defined. More generally, any measurable -ary deterministic operation between measurable spaces extends to a stochastic operation in the obvious manner.
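For a finite sample space this pointwise extension is easy to model concretely; the following sketch (all names and data are my own illustrative choices) represents a stochastic object as a map from states to values and lifts a deterministic binary operation to act state by state.

```python
# Model a stochastic object over a finite sample space Omega as a dict
# from states to values; lift any deterministic binary operation to act
# pointwise on such objects.  (All names here are illustrative.)
Omega = ["w1", "w2", "w3"]

def lift(op):
    """Lift a deterministic binary operation to stochastic objects."""
    return lambda X, Y: {w: op(X[w], Y[w]) for w in Omega}

X = {"w1": 1.0, "w2": 2.0, "w3": 3.0}
Y = {"w1": 10.0, "w2": 20.0, "w3": 30.0}

add = lift(lambda a, b: a + b)
print(add(X, Y))  # {'w1': 11.0, 'w2': 22.0, 'w3': 33.0}
```

In the continuous setting one would additionally quotient by almost sure equivalence and check measurability, but the pointwise character of the lifted operation is the same.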

There is a similar story for -ary relations , although here one has to make a distinction between a deterministic reading of the relation and a stochastic one. Namely, if we are given stochastic objects for , the relation does not necessarily take values in the deterministic Boolean algebra , but only in the stochastic Boolean algebra – thus may be true with some positive probability and also false with some positive probability (with the event that being stochastically true being determined up to null events). Of course, the deterministic Boolean algebra embeds in the stochastic one, so we can talk about a relation being deterministically true or deterministically false, which (due to our identification of stochastic objects that agree almost surely) means that is almost surely true or almost surely false respectively. For instance given two stochastic objects , one can view their equality relation as having a stochastic truth value. This is distinct from the way the equality symbol is used in mathematical logic, which we will now call “equality in the deterministic sense” to reduce confusion. Thus, in the deterministic sense if and only if the stochastic truth value of is equal to , that is to say that for almost all .

Any universal identity for deterministic operations (or universal implication between identities) extends to their stochastic counterparts: for instance, addition is commutative, associative, and cancellative on the space of deterministic reals , and is therefore commutative, associative, and cancellative on stochastic reals as well. However, one has to be more careful when working with mathematical laws that are not expressible as universal identities, or implications between identities. For instance, is an integral domain: if are deterministic reals such that , then one must have or . However, if are *stochastic* reals such that (in the deterministic sense), then it is no longer necessarily the case that (in the deterministic sense) or that (in the deterministic sense); however, it is still true that “ or ” is true in the deterministic sense if one interprets the boolean operator “or” stochastically, thus “ or ” is true for almost all . Another way to properly obtain a stochastic interpretation of the integral domain property of is to rewrite it as

and then make all sets stochastic to obtain the true statement

thus we have to allow the index for which vanishing occurs to also be stochastic, rather than deterministic. (A technical note: when one proves this statement, one has to select in a measurable fashion; for instance, one can choose to equal when , and otherwise (so that in the “tie-breaking” case when and both vanish, one always selects to equal ).)
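The measurable selection described above is transparent in the finite discrete model (the data below is my own illustration): given stochastic reals whose product vanishes at every state, one picks at each state an index of a vanishing factor, with a fixed tie-breaking rule when both factors vanish.

```python
# Given stochastic reals X, Y over a finite sample space with X*Y = 0 at
# every state, select a stochastic index i in {1, 2} so that the i-th
# factor vanishes at each state; when both vanish, always select 1
# (the tie-breaking rule).  Data here is illustrative.
Omega = ["w1", "w2", "w3"]
X = {"w1": 0.0, "w2": 5.0, "w3": 0.0}
Y = {"w1": 3.0, "w2": 0.0, "w3": 0.0}
assert all(X[w] * Y[w] == 0.0 for w in Omega)

i = {w: (1 if X[w] == 0.0 else 2) for w in Omega}

# At every state, the selected factor indeed vanishes.
for w in Omega:
    assert (X[w] if i[w] == 1 else Y[w]) == 0.0
print(i)  # {'w1': 1, 'w2': 2, 'w3': 1}
```

The rule "i = 1 whenever X vanishes" is a measurable function of (X, Y), which is exactly what is needed for the stochastic index in the continuous setting.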

Similarly, the law of the excluded middle fails when interpreted deterministically, but remains true when interpreted stochastically: if is a stochastic statement, then it is not necessarily the case that is either deterministically true or deterministically false; however the sentence “ or not-” is still deterministically true if the boolean operator “or” is interpreted stochastically rather than deterministically.

To avoid having to keep pointing out which operations are interpreted stochastically and which ones are interpreted deterministically, we will use the following convention: if we assert that a mathematical sentence involving stochastic objects is true, then (unless otherwise specified) we mean that is deterministically true, assuming that all relations used inside are interpreted stochastically. For instance, if are stochastic reals, when we assert that “Exactly one of , , or is true”, then by default it is understood that the relations , , and the boolean operator “exactly one of” are interpreted stochastically, and the assertion is that the sentence is deterministically true.

In the above discussion, the stochastic objects being considered were elements of a deterministic space , such as the reals . However, it can often be convenient to generalise this situation by allowing the ambient space to also be stochastic. For instance, one might wish to consider a stochastic vector inside a stochastic vector space , or a stochastic edge of a stochastic graph . In order to formally describe this situation within the classical framework of measure theory, one needs to place all the ambient spaces inside a measurable space. This can certainly be done in many contexts (e.g. when considering random graphs on a deterministic set of vertices, or if one is willing to work up to equivalence and place the ambient spaces inside a suitable moduli space), but is not completely natural in other contexts. For instance, if one wishes to consider stochastic vector spaces of potentially unbounded dimension (in particular, potentially larger than any given cardinal that one might specify in advance), then the class of all possible vector spaces is so large that it becomes a proper class rather than a set (even if one works up to equivalence), making it problematic to give this class the structure of a measurable space; furthermore, even once one does so, one needs to take additional care to pin down what it would mean for a random vector lying in a random vector space to depend “measurably” on .

Of course, in any reasonable application one can avoid the set theoretic issues at least by various *ad hoc* means, for instance by restricting the dimension of all spaces involved to some fixed cardinal such as . However, the measure-theoretic issues can require some additional effort to resolve properly.

In this post I would like to describe a different way to formalise stochastic spaces, and stochastic elements of these spaces, by viewing the spaces as a measure-theoretic analogue of a sheaf, but being over the probability space rather than over a topological space; stochastic objects are then *sections* of such sheaves. Actually, for minor technical reasons it is convenient to work in the slightly more general setting in which the base space is a finite measure space rather than a probability space, thus can take any value in rather than being normalised to equal . This will allow us to easily localise to subevents of without the need for normalisation, even when is a null event (though we caution that the map from deterministic objects ceases to be injective in this latter case). We will however still continue to use probabilistic terminology, despite the lack of normalisation; thus for instance, sets in will be referred to as *events*, the measure of such a set will be referred to as the *probability* (which is now permitted to exceed in some cases), and an event whose complement is a null event shall be said to hold *almost surely*. It is in fact likely that almost all of the theory below extends to base spaces which are -finite rather than finite (for instance, by damping the measure to become finite, without introducing any further null events), although we will not pursue this further generalisation here.

The approach taken in this post is “topos-theoretic” in nature (although we will not use the language of topoi explicitly here), and is well suited to a “pointless” or “point-free” approach to probability theory, in which the role of the stochastic state is suppressed as much as possible; instead, one strives to always adopt a “relative point of view”, with all objects under consideration being viewed as stochastic objects relative to the underlying base space . In this perspective, the stochastic version of a set is as follows.

Definition 1 (Stochastic set) Unless otherwise specified, we assume that we are given a fixed finite measure space (which we refer to as the base space). A stochastic set (relative to ) is a tuple consisting of the following objects:

- A set assigned to each event ; and
- A restriction map from to to each pair of nested events . (Strictly speaking, one should indicate the dependence on in the notation for the restriction map, e.g. using instead of , but we will abuse notation by omitting the dependence.)

We refer to elements of as local stochastic elements of the stochastic set , localised to the event , and elements of as global stochastic elements (or simply elements) of the stochastic set. (In the language of sheaves, one would use “sections” instead of “elements” here, but I prefer to use the latter terminology here, for compatibility with conventional probabilistic notation, where for instance measurable maps from to are referred to as real random variables, rather than sections of the reals.) Furthermore, we impose the following axioms:

- (Category) The map from to is the identity map, and if are events in , then for all .
- (Null events trivial) If is a null event, then the set is a singleton set. (In particular, is always a singleton set; this is analogous to the convention that for any number .)
- (Countable gluing) Suppose that for each natural number , one has an event and an element such that for all . Then there exists a unique such that for all .
If is an event in , we define the localisation of the stochastic set to to be the stochastic set relative to . (Note that there is no need to renormalise the measure on , as we are not demanding that our base space have total measure .)

The following fact is useful for actually verifying that a given object indeed has the structure of a stochastic set:

Exercise 1 Show that to verify the countable gluing axiom of a stochastic set, it suffices to do so under the additional hypothesis that the events are disjoint. (Note that this is quite different from the situation with sheaves over a topological space, in which the analogous gluing axiom is often trivial in the disjoint case but has non-trivial content in the overlapping case. This is ultimately because a -algebra is closed under all Boolean operations, whereas a topology is only closed under union and intersection.)

Let us illustrate the concept of a stochastic set with some examples.

Example 1 (Discrete case) A simple case arises when is a discrete space which is at most countable. We assign a set to each , with a singleton if . One then sets , with the obvious restriction maps, giving rise to a stochastic set . (Thus, a local element of can be viewed as a map on that takes values in for each .) Conversely, it is not difficult to see that any stochastic set over an at most countable discrete probability space is of this form up to isomorphism. In this case, one can think of as a bundle of sets over each point (of positive probability) in the base space . One can extend this bundle interpretation of stochastic sets to reasonably nice sample spaces (such as standard Borel spaces) and similarly reasonable ; however, I would like to avoid this interpretation in the formalism below in order to be able to easily work in settings in which and are very “large” (e.g. not separable in any reasonable sense). Note that we permit some of the to be empty, thus it can be possible for to be empty whilst for some strict subevents of to be non-empty. (This is analogous to how it is possible for a sheaf to have local sections but no global sections.) As such, the space of global elements does not completely determine the stochastic set ; one sometimes needs to localise to an event in order to see the full structure of such a set. Thus it is important to distinguish between a stochastic set and its space of global elements. (As such, it is a slight abuse of the axiom of extensionality to refer to global elements of simply as “elements”, but hopefully this should not cause too much confusion.)
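The discrete case can be sketched directly in code (the fibre data and names below are my own): a local element over an event is a choice function picking a point in each fibre over that event, restriction is literal restriction of the choice function, and an empty fibre kills all global elements without killing the local ones away from it.

```python
from itertools import product

# Fibres over a three-point discrete base space; the fibre over w3 is empty.
X = {"w1": ["a", "b"], "w2": ["c"], "w3": []}

def Gamma(E):
    """All local elements over the event E: choice functions on E."""
    E = sorted(E)
    return [dict(zip(E, choice)) for choice in product(*(X[w] for w in E))]

def restrict(x, F):
    """Restriction map from an event down to a subevent F."""
    return {w: x[w] for w in F}

print(len(Gamma({"w1", "w2"})))        # 2 local elements over {w1, w2}
print(len(Gamma({"w1", "w2", "w3"})))  # 0: no global elements at all
```

This exhibits concretely the phenomenon noted above: local sections may exist over strict subevents even when the space of global elements is empty.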

Example 2 (Measurable spaces as stochastic sets) Returning now to a general base space , any (deterministic) measurable space gives rise to a stochastic set , with being defined as in the previous discussion as the measurable functions from to modulo almost everywhere equivalence (in particular, a singleton set when is null), with the usual restriction maps. The constraint of measurability on the maps , together with the quotienting by almost sure equivalence, means that is now more complicated than a plain Cartesian product of fibres, but this still serves as a useful first approximation to what is for the purposes of developing intuition. Indeed, the measurability constraint is so weak (as compared for instance to topological or smooth constraints in other contexts, such as sheaves of continuous or smooth sections of bundles) that the intuition of essentially independent fibres is quite an accurate one, at least if one avoids consideration of an uncountable number of objects simultaneously.

Example 3 (Extended Hilbert modules) This example is the one that motivated this post for me. Suppose that one has an extension of the base space , thus we have a measurable factor map such that the pushforward of the measure by is equal to . Then we have a conditional expectation operator , defined as the adjoint of the pullback map . As is well known, the conditional expectation operator also extends to a contraction ; by monotone convergence we may also extend to a map from measurable functions from to the extended non-negative reals , to measurable functions from to . We then define the “extended Hilbert module” to be the space of functions with finite almost everywhere. This is an extended version of the Hilbert module , which is defined similarly except that is required to lie in ; this is a Hilbert module over which is of particular importance in the Furstenberg-Zimmer structure theory of measure-preserving systems. We can then define the stochastic set by setting with the obvious restriction maps. In the case that are standard Borel spaces, one can disintegrate as an integral of probability measures (supported in the fibre ), in which case this stochastic set can be viewed as having fibres (though if is not discrete, there are still some measurability conditions in on the local and global elements that need to be imposed). However, I am interested in the case when are not standard Borel spaces (in fact, I will take them to be algebraic probability spaces, as defined in this previous post), in which case disintegrations are not available. However, it appears that the stochastic analysis developed in this blog post can serve as a substitute for the tool of disintegration in this context.

We make the remark that if is a stochastic set and are events that are equivalent up to null events, then one can identify with (through their common restriction to , with the restriction maps now being bijections). As such, the notion of a stochastic set does not require the full structure of a concrete probability space ; one could also have defined the notion using only the abstract -algebra consisting of modulo null events as the base space, or equivalently one could define stochastic sets over the algebraic probability spaces defined in this previous post. However, we will stick with the classical formalism of concrete probability spaces here so as to keep the notation reasonably familiar.

As a corollary of the above observation, we see that if the base space has total measure , then all stochastic sets are trivial (they are just points).

Exercise 2 If is a stochastic set, show that there exists an event with the property that for any event , is non-empty if and only if is contained in modulo null events. (In particular, is unique up to null events.) Hint: consider the numbers for ranging over all events with non-empty, and form a maximising sequence for these numbers. Then use all three axioms of a stochastic set.

One can now take many of the fundamental objects, operations, and results in set theory (and, hence, in most other categories of mathematics) and establish analogues relative to a finite measure space. Implicitly, what we will be doing in the next few paragraphs is endowing the category of stochastic sets with the structure of an elementary topos. However, to keep things reasonably concrete, we will not explicitly emphasise the topos-theoretic formalism here, although it is certainly lurking in the background.

Firstly, we define a *stochastic function* between two stochastic sets to be a collection of maps for each which form a natural transformation in the sense that for all and nested events . In the case when is discrete and at most countable (and after deleting all null points), a stochastic function is nothing more than a collection of functions for each , with the function then being a direct sum of the factor functions :

Thus (in the discrete, at most countable setting, at least) stochastic functions do not mix together information from different states in a sample space; the value of at depends only on the value of at . The situation is a bit more subtle for continuous probability spaces, due to the identification of stochastic objects that agree almost surely; nevertheless, it is still good intuition to think of stochastic functions as essentially being “pointwise” or “local” in nature.

One can now form the stochastic set of functions from to , by setting for any event to be the set of local stochastic functions of the localisations of to ; this is a stochastic set if we use the obvious restriction maps. In the case when is discrete and at most countable, the fibre at a point of positive measure is simply the set of functions from to .

In a similar spirit, we say that one stochastic set is a (stochastic) subset of another , and write , if we have a stochastic inclusion map, thus for all events , with the restriction maps being compatible. We can then define the power set of a stochastic set by setting for any event to be the set of all stochastic subsets of relative to ; it is easy to see that is a stochastic set with the obvious restriction maps (one can also identify with in the obvious fashion). Again, when is discrete and at most countable, the fibre of at a point of positive measure is simply the deterministic power set .

Note that if is a stochastic function and is a stochastic subset of , then the inverse image , defined by setting for any event to be the set of those with , is a stochastic subset of . In particular, given a -ary relation , the inverse image is a stochastic subset of , which by abuse of notation we denote as

In a similar spirit, if is a stochastic subset of and is a stochastic function, we can define the image by setting to be the set of those with ; one easily verifies that this is a stochastic subset of .

Remark 2 One should caution that in the definition of the subset relation , it is important that for all events , not just the global event ; in particular, just because a stochastic set has no global sections, does not mean that it is contained in the stochastic empty set .

Now we discuss Boolean operations on stochastic subsets of a given stochastic set . Given two stochastic subsets of , the stochastic intersection is defined by setting to be the set of that lie in both and :

This is easily verified to again be a stochastic subset of . More generally one may define stochastic countable intersections for any sequence of stochastic subsets of . One *could* extend this definition to uncountable families if one wished, but I would advise against it, because some of the usual laws of Boolean algebra (e.g. the de Morgan laws) may break down in this setting.

Stochastic unions are a bit more subtle. The set should not be defined to simply be the union of and , as this would not respect the gluing axiom. Instead, we define to be the set of all such that one can cover by measurable subevents such that for ; then may be verified to be a stochastic subset of . Thus for instance is the stochastic union of and . Similarly for countable unions of stochastic subsets of , although uncountable unions are extremely problematic (they are disliked by both the measure theory and the countable gluing axiom) and will not be defined here. Finally, the stochastic difference set is defined as the set of all in such that for any subevent of of positive probability. One may verify that in the case when is discrete and at most countable, these Boolean operations correspond to the classical Boolean operations applied separately to each fibre of the relevant sets . We also leave as an exercise to the reader to verify the usual laws of Boolean arithmetic, e.g. the de Morgan laws, provided that one works with at most countable unions and intersections.

One can also consider a stochastic finite union in which the number of sets in the union is itself stochastic. More precisely, let be a stochastic set, let be a stochastic natural number, and let be a stochastic function from the stochastic set (defined by setting ) to the stochastic power set . Here we are considering to be a natural number, to allow for unions that are possibly empty, with used for the positive natural numbers. We also write for the stochastic function . Then we can define the stochastic union by setting for an event to be the set of local elements with the property that there exists a covering of by measurable subevents for , such that one has and . One can verify that is a stochastic set (with the obvious restriction maps). Again, in the model case when is discrete and at most countable, the fibre is what one would expect it to be, namely .

The Cartesian product of two stochastic sets may be defined by setting for all events , with the obvious restriction maps; this is easily seen to be another stochastic set. This lets one define the concept of a -ary operation from stochastic sets to another stochastic set , or a -ary relation . In particular, given for , the relation may be deterministically true, deterministically false, or have some other stochastic truth value.

Remark 3 In the degenerate case when is null, stochastic logic becomes a bit weird: all stochastic statements are deterministically true, as are their stochastic negations, since every event in (even the empty set) now holds with full probability. Among other pathologies, the empty set now has a global element over (this is analogous to the notorious convention ), and any two deterministic objects become equal over : .

The following simple observation is crucial to subsequent discussion. If is a sequence taking values in the global elements of a stochastic space , then we may also define global elements for *stochastic* indices as well, by appealing to the countable gluing axiom to glue together restricted to the set for each deterministic natural number to form . With this definition, the map is a stochastic function from to ; indeed, this creates a one-to-one correspondence between external sequences (maps from to ) and stochastic sequences (stochastic functions from to ). Similarly with replaced by any other at most countable set. This observation will be important in allowing many deterministic arguments involving sequences to be carried over to the stochastic setting.
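In the discrete model, the gluing along a stochastic index amounts to the following (the data is my own illustration): the glued element samples, at each state, the value at that state of the member of the sequence selected by the stochastic index there.

```python
# A sequence of two global elements x[1], x[2] over a finite base space,
# and a stochastic natural number n; the glued element x_n takes the
# value x[n(w)](w) at each state w.  (Illustrative data.)
Omega = ["w1", "w2", "w3"]
x = {1: {"w1": "a1", "w2": "b1", "w3": "c1"},
     2: {"w1": "a2", "w2": "b2", "w3": "c2"}}
n = {"w1": 2, "w2": 1, "w3": 2}  # a stochastic index

x_n = {w: x[n[w]][w] for w in Omega}
print(x_n)  # {'w1': 'a2', 'w2': 'b1', 'w3': 'c2'}
```

Equivalently, x_n is glued from the restrictions of each x[k] to the event where n takes the value k, which is exactly the countable gluing construction in the text.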

We now specialise from the extremely broad discipline of set theory to the more focused discipline of real analysis. There are two fundamental axioms that underlie real analysis (and in particular distinguishes it from real *algebra*). The first is the Archimedean property, which we phrase in the “no infinitesimal” formulation as follows:

Proposition 2 (Archimedean property) Let be such that for all positive natural numbers . Then .

The other is the least upper bound axiom:

Proposition 3 (Least upper bound axiom) Let be a non-empty subset of which has an upper bound , thus for all . Then there exists a unique real number with the following properties:

- for all .
- For any real , there exists such that .
- .
Furthermore, does not depend on the choice of .

The Archimedean property extends easily to the stochastic setting:

Proposition 4 (Stochastic Archimedean property) Let be such that for all deterministic natural numbers . Then .

Remark 4 Here, incidentally, is one place in which this stochastic formalism deviates from the nonstandard analysis formalism, as the latter certainly permits the existence of infinitesimal elements. On the other hand, we caution that stochastic real numbers are permitted to be unbounded, so that the formulation of the Archimedean property asserting that every real number is bounded by some (deterministic) natural number is not valid in the stochastic setting.

The proof is easy and is left to the reader. The least upper bound axiom also extends nicely to the stochastic setting, but the proof requires more work (in particular, our argument uses the monotone convergence theorem):

Theorem 5 (Stochastic least upper bound axiom) Let be a stochastic subset of which has a global upper bound , thus for all , and is globally non-empty in the sense that there is at least one global element . Then there exists a unique stochastic real number with the following properties:

- for all .
- For any stochastic real , there exists such that .
- .
Furthermore, does not depend on the choice of .

For future reference, we note that the same result holds with replaced by throughout, since the latter may be embedded in the former, for instance by mapping to and to . In applications, the above theorem serves as a reasonable substitute for the countable axiom of choice, which does not appear to hold in unrestricted generality relative to a measure space; in particular, it can be used to generate various extremising sequences for stochastic functionals on various stochastic function spaces.

*Proof:* Uniqueness is clear (using the Archimedean property), as well as the independence on , so we turn to existence. By using an order-preserving map from to (e.g. ) we may assume that is a subset of , and that .

We observe that is a lattice: if , then and also lie in . Indeed, may be formed by appealing to the countable gluing axiom to glue (restricted to the set ) with (restricted to the set ), and similarly for . (Here we use the fact that relations such as are Borel measurable on .)

Let denote the deterministic quantity

then (by Proposition 3!) is well-defined; here we use the hypothesis that is finite. Thus we may find a sequence of elements of such that

Using the lattice property, we may assume that the are non-decreasing: whenever . If we then define (after choosing measurable representatives of each equivalence class ), then is a stochastic real with .

If , then , and so

From this and (1) we conclude that

From monotone convergence, we conclude that

and so , as required.

Now let be a stochastic real. After choosing measurable representatives of each relevant equivalence class, we see that for almost every , we can find a natural number with . If we choose to be the first such positive natural number when it exists, and (say) otherwise, then is a stochastic positive natural number and . The claim follows.
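In the discrete model the stochastic supremum produced by Theorem 5 is just the fibrewise supremum, as the following toy computation (data mine) illustrates; the lattice step in the proof corresponds to taking fibrewise maxima of finitely many elements.

```python
# A stochastic subset A of the reals over a two-point base space, listed
# as a finite set of global elements; its stochastic least upper bound is
# the fibrewise supremum.  (Toy data for illustration.)
Omega = ["w1", "w2"]
A = [{"w1": 0.1, "w2": 0.7},
     {"w1": 0.5, "w2": 0.2},
     {"w1": 0.3, "w2": 0.9}]

sup_A = {w: max(a[w] for a in A) for w in Omega}

# sup_A dominates every element of A at every state, and each of its
# fibre values is attained by some element of A at that state.
assert all(a[w] <= sup_A[w] for a in A for w in Omega)
print(sup_A)  # {'w1': 0.5, 'w2': 0.9}
```

Note that sup_A need not equal any single element of A globally (here it agrees with different elements at different states), which is why the proof must glue along a stochastic index rather than select one deterministic maximiser.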

Remark 5 One can abstract away the role of the measure here, leaving only the ideal of null sets. The property that the measure is finite is then replaced by the more general property that, given any non-empty family of measurable sets, there is an at most countable union of sets in that family that is an upper bound modulo null sets for all elements in that family.

Using Proposition 4 and Theorem 5, one can then revisit many of the other foundational results of deterministic real analysis, and develop stochastic analogues; we give some examples of this below the fold (focusing on the Heine-Borel theorem and a case of the spectral theorem). As an application of this formalism, we revisit some of the Furstenberg-Zimmer structural theory of measure-preserving systems, particularly that of relatively compact and relatively weakly mixing systems, and interpret them in this framework, basically as stochastic versions of compact and weakly mixing systems (though with the caveat that the shift map is allowed to act non-trivially on the underlying probability space). As this formalism is “point-free”, in that it avoids explicit use of fibres and disintegrations, it will be well suited for generalising this structure theory to settings in which the underlying probability spaces are not standard Borel, and the underlying groups are uncountable; I hope to discuss such generalisations in future blog posts.

Remark 6 Roughly speaking, stochastic real analysis can be viewed as a restricted subset of classical real analysis in which all operations have to be "measurable" with respect to the base space. In particular, indiscriminate application of the axiom of choice is not permitted, and one should largely restrict oneself to performing countable unions and intersections rather than arbitrary unions or intersections. Presumably one can formalise this intuition with a suitable "countable transfer principle", but I was not able to formulate a clean and general principle of this sort, instead verifying various assertions about stochastic objects by hand rather than by direct transfer from the deterministic setting. However, it would be desirable to have such a principle, since otherwise one is faced with the tedious task of redoing all the foundations of real analysis (or whatever other base theory of mathematics one is going to be working in) in the stochastic setting by carefully repeating all the arguments.

More generally, topos theory is a good formalism for capturing precisely the informal idea of performing mathematics with certain operations, such as the axiom of choice, the law of the excluded middle, or arbitrary unions and intersections, being somehow "prohibited" or otherwise "restricted".
