
Here’s a cute identity I discovered by accident recently. Observe that

\displaystyle  \frac{d}{dx} (1+x^2)^{0/2} = 0

\displaystyle  \frac{d^2}{dx^2} (1+x^2)^{1/2} = \frac{1}{(1+x^2)^{3/2}}

\displaystyle  \frac{d^3}{dx^3} (1+x^2)^{2/2} = 0

\displaystyle  \frac{d^4}{dx^4} (1+x^2)^{3/2} = \frac{9}{(1+x^2)^{5/2}}

\displaystyle  \frac{d^5}{dx^5} (1+x^2)^{4/2} = 0

\displaystyle  \frac{d^6}{dx^6} (1+x^2)^{5/2} = \frac{225}{(1+x^2)^{7/2}}

and so one can conjecture that one has

\displaystyle  \frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2} = 0

when k is even, and

\displaystyle  \frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2} = \frac{(1 \times 3 \times \dots \times k)^2}{(1+x^2)^{(k+2)/2}}

when k is odd. This is obvious in the even case since (1+x^2)^{k/2} is a polynomial of degree k, but I struggled for a while with the odd case before finding a slick three-line proof. (I was first trying to prove the weaker statement that \frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2} was non-negative, but for some strange reason I was only able to establish this by working out the derivative exactly, rather than by using more analytic methods, such as convexity arguments.) I thought other readers might like the challenge (and also I’d like to see some other proofs), so rather than post my own proof immediately, I’ll see if anyone would like to supply their own proofs or thoughts in the comments. Also I am curious to know if this identity is connected to any other existing piece of mathematics.
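For readers who would like to experiment before hunting for a proof, the conjectured identity is easy to check symbolically for small {k}. A quick sketch using the sympy library (not part of any proof, just a sanity check):

```python
import sympy as sp

x = sp.symbols('x')

def higher_derivative(k):
    """The (k+1)-st derivative of (1+x^2)^(k/2), simplified."""
    return sp.simplify(sp.diff((1 + x**2) ** sp.Rational(k, 2), x, k + 1))

for k in range(6):
    d = higher_derivative(k)
    if k % 2 == 0:
        # (1+x^2)^(k/2) is a polynomial of degree k, so the (k+1)-st derivative vanishes
        assert d == 0
    else:
        # conjectured value (1*3*...*k)^2 / (1+x^2)^((k+2)/2); note 1*3*...*k = k!! for odd k
        expected = sp.factorial2(k) ** 2 * (1 + x**2) ** (-sp.Rational(k + 2, 2))
        assert sp.simplify(d - expected) == 0
```

The loop reproduces the six displayed identities above (the odd cases give the constants {1}, {9}, {225}).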

I’ve just uploaded to the arXiv my paper “Cancellation for the multilinear Hilbert transform“, submitted to Collectanea Mathematica. This paper uses methods from additive combinatorics (and more specifically, the arithmetic regularity and counting lemmas from this paper of Ben Green and myself) to obtain a slight amount of progress towards the open problem of obtaining {L^p} bounds for the trilinear and higher Hilbert transforms (as discussed in this previous blog post). For instance, the trilinear Hilbert transform

\displaystyle  H_3( f_1, f_2, f_3 )(x) := p.v. \int_{\bf R} f_1(x+t) f_2(x+2t) f_3(x+3t)\ \frac{dt}{t}

is not known to be bounded from any {L^{p_1}({\bf R}) \times L^{p_2}({\bf R}) \times L^{p_3}({\bf R})} to {L^p({\bf R})}, although it is conjectured to be bounded when {1/p =1/p_1 +1/p_2+1/p_3} and {1 < p_1,p_2,p_3,p < \infty}. (For {p} well below {1}, one can use additive combinatorics constructions to demonstrate unboundedness; see this paper of Demeter.) One can approach this problem by considering the truncated trilinear Hilbert transforms

\displaystyle  H_{3,r,R}( f_1, f_2, f_3 )(x) := \int_{r \leq |t| \leq R} f_1(x+t) f_2(x+2t) f_3(x+3t)\ \frac{dt}{t}

for {0 < r < R}. It is not difficult to show that the boundedness of {H_3} is equivalent to the boundedness of {H_{3,r,R}} with bounds that are uniform in {R} and {r}. On the other hand, from Minkowski’s inequality and Hölder’s inequality one can easily obtain the non-uniform bound of {2 \log \frac{R}{r}} for {H_{3,r,R}}. The main result of this paper is a slight improvement of this trivial bound to {o( \log \frac{R}{r})} as {R/r \rightarrow \infty}. Roughly speaking, the way this gain is established is as follows. First there are some standard time-frequency type reductions to reduce to the task of obtaining some non-trivial cancellation on a single “tree”. Using a “generalised von Neumann theorem”, we show that such cancellation will happen if (a discretised version of) one or more of the functions {f_1,f_2,f_3} (or a dual function {f_0} that it is convenient to test against) is small in the Gowers {U^3} norm. However, the arithmetic regularity lemma alluded to earlier allows one to represent an arbitrary function {f_i}, up to a small error, as the sum of such a “Gowers uniform” function, plus a structured function (or more precisely, an irrational virtual nilsequence). This effectively reduces the problem to that of establishing some cancellation in a single tree in the case when all functions {f_0,f_1,f_2,f_3} involved are irrational virtual nilsequences. At this point, the contribution of each component of the tree can be estimated using the “counting lemma” from my paper with Ben. The main term in the asymptotics is a certain integral over a nilmanifold, but because the kernel {\frac{dt}{t}} in the trilinear Hilbert transform is odd, it turns out that this integral vanishes, giving the required cancellation.
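As a toy numerical illustration (not part of the paper's argument), one can discretise the truncated transform {H_{3,r,R}} directly; the function names below are invented for this sketch:

```python
import numpy as np

def truncated_trilinear_hilbert(f1, f2, f3, x, r, R, num_t=4000):
    """Approximate H_{3,r,R}(f1,f2,f3)(x) = int_{r<=|t|<=R} f1(x+t) f2(x+2t) f3(x+3t) dt/t
    by log-spaced quadrature on [r, R] and its mirror image [-R, -r]."""
    t = np.logspace(np.log10(r), np.log10(R), num_t)
    dt = np.gradient(t)  # local quadrature weights for the log-spaced grid
    pos = np.sum(f1(x + t) * f2(x + 2 * t) * f3(x + 3 * t) / t * dt)
    neg = np.sum(f1(x - t) * f2(x - 2 * t) * f3(x - 3 * t) / (-t) * dt)
    return pos + neg

g = lambda u: np.exp(-u ** 2)  # Gaussian test functions

# At x = 0 the integrand is odd in t, so the two half-lines cancel exactly;
# at generic x the truncated transform is finite but need not vanish.
val = truncated_trilinear_hilbert(g, g, g, x=0.5, r=1e-3, R=10.0)
```

Note how the oddness of the kernel {\frac{dt}{t}} produces the cancellation between the two half-lines; this is the same mechanism that makes the main term in the asymptotics vanish.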

The same argument works for higher order Hilbert transforms (and one can also replace the coefficients in these transforms with other rational constants). However, because the quantitative bounds in the arithmetic regularity and counting lemmas are so poor, it does not seem likely that one can use these methods to remove the logarithmic growth in {R/r} entirely, and some additional ideas will be needed to resolve the full conjecture.

I’ve just uploaded to the arXiv my paper “Failure of the {L^1} pointwise and maximal ergodic theorems for the free group“, submitted to Forum of Mathematics, Sigma. This paper concerns a variant of the pointwise ergodic theorem of Birkhoff, which asserts that if one has a measure-preserving shift map {T: X \rightarrow X} on a probability space {X = (X,\mu)}, then for any {f \in L^1(X)}, the averages {\frac{1}{N} \sum_{n=1}^N f \circ T^{-n}} converge pointwise almost everywhere. (In the important case when the shift map {T} is ergodic, the pointwise limit is simply the mean {\int_X f\ d\mu} of the original function {f}.)

The pointwise ergodic theorem can be extended to measure-preserving actions of other amenable groups, if one uses a suitably “tempered” Folner sequence of averages; see this paper of Lindenstrauss for more details. (I also wrote up some notes on that paper here, back in 2006 before I had started this blog.) But the arguments used to handle the amenable case break down completely for non-amenable groups, and in particular for the free non-abelian group {F_2} on two generators.

Nevo and Stein studied this problem and obtained a number of pointwise ergodic theorems for {F_2}-actions {(T_g)_{g \in F_2}} on probability spaces {(X,\mu)}. For instance, for the spherical averaging operators

\displaystyle  {\mathcal A}_n f := \frac{1}{4 \times 3^{n-1}} \sum_{g \in F_2: |g| = n} f \circ T_g^{-1}

(where {|g|} denotes the length of the reduced word that forms {g}), they showed that {{\mathcal A}_{2n} f} converged pointwise almost everywhere provided that {f} was in {L^p(X)} for some {p>1}. (The need to restrict to spheres of even radius can be seen by considering the action of {F_2} on the two-element set {\{0,1\}} in which both generators of {F_2} act by interchanging the elements, in which case {{\mathcal A}_n} is determined by the parity of {n}.) This result was reproven with a different and simpler proof by Bufetov, who also managed to relax the condition {f \in L^p(X)} to the weaker condition {f \in L \log L(X)}.
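The parity obstruction in the two-element example is easy to verify by brute force. A small sketch (the helper names are invented for illustration) that enumerates the spheres of {F_2} directly:

```python
# Brute-force check of the parity obstruction: F_2 = <a,b> acts on the two-point
# set {0,1} with both generators (and hence their inverses) swapping the points.
# A reduced word of length n then acts by the swap iff n is odd, so the spherical
# averages A_n f oscillate with the parity of n.

GENS = ['a', 'A', 'b', 'B']          # A = a^{-1}, B = b^{-1}
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduced_words(n):
    """All reduced words of length n >= 1 in F_2; there are 4 * 3^(n-1) of them."""
    words = [[g] for g in GENS]
    for _ in range(n - 1):
        words = [w + [g] for w in words for g in GENS if g != INV[w[-1]]]
    return words

def spherical_average(n, f, point):
    """A_n f(point) for this action: every generator acts by the swap 0 <-> 1,
    so each word of length n moves `point` to point XOR (n mod 2)."""
    words = reduced_words(n)
    return sum(f(point ^ (len(w) % 2)) for w in words) / len(words)

f = lambda y: 1.0 if y == 0 else 0.0
# A_{2n} f(0) = 1 while A_{2n+1} f(0) = 0, so A_n f does not converge,
# whereas the even-radius averages A_{2n} f are constant in n.
```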

The question remained open as to whether the pointwise ergodic theorem for {F_2}-actions held if one only assumed that {f} was in {L^1(X)}. Nevo and Stein were able to establish this for the Cesàro averages {\frac{1}{N} \sum_{n=1}^N {\mathcal A}_n}, but not for {{\mathcal A}_n} itself. About six years ago, Assaf Naor and I tried our hand at this problem, and were able to show an associated maximal inequality on {\ell^1(F_2)}; but due to the non-amenability of {F_2}, this inequality did not transfer to {L^1(X)} and did not have any direct impact on this question, despite a fair amount of effort on our part to attack it.

Inspired by some recent conversations with Lewis Bowen, I returned to this problem. This time around, I tried to construct a counterexample to the {L^1} pointwise ergodic theorem – something Assaf and I had not seriously attempted to do (perhaps due to being a bit too enamoured of our {\ell^1(F_2)} maximal inequality). I knew of an existing counterexample of Ornstein regarding a failure of an {L^1} ergodic theorem for iterates {P^n} of a self-adjoint Markov operator – in fact, I had written some notes on this example back in 2007. Upon revisiting my notes, I soon discovered that the Ornstein construction was adaptable to the {F_2} setting, thus settling the problem in the negative:

Theorem 1 (Failure of {L^1} pointwise ergodic theorem) There exists a measure-preserving {F_2}-action on a probability space {X} and a non-negative function {f \in L^1(X)} such that {\sup_n {\mathcal A}_{2n} f(x) = +\infty} for almost every {x}.

To describe the proof of this theorem, let me first briefly sketch the main ideas of Ornstein’s construction, which gave an example of a self-adjoint Markov operator {P} on a probability space {X} and a non-negative {f \in L^1(X)} such that {\sup_n P^n f(x) = +\infty} for almost every {x}. By some standard manipulations, it suffices to show that for any given {\alpha > 0} and {\varepsilon>0}, there exists a self-adjoint Markov operator {P} on a probability space {X} and a non-negative {f \in L^1(X)} with {\|f\|_{L^1(X)} \leq \alpha}, such that {\sup_n P^n f \geq 1-\varepsilon} on a set of measure at least {1-\varepsilon}. Actually, it will be convenient to replace the Markov chain {(P^n f)_{n \geq 0}} with an ancient Markov chain {(f_n)_{n \in {\bf Z}}} – that is to say, a sequence of non-negative functions {f_n} for both positive and negative {n}, such that {f_{n+1} = P f_n} for all {n \in {\bf Z}}. The purpose of requiring the Markov chain to be ancient (that is, to extend infinitely far back in time) is to allow for the Markov chain to be shifted arbitrarily in time, which is key to Ornstein’s construction. (Technically, Ornstein’s original argument only uses functions that go back to a large negative time, rather than being infinitely ancient, but I will gloss over this point for sake of discussion, as it turns out that the {F_2} version of the argument can be run using infinitely ancient chains.)

For any {\alpha>0}, let {P(\alpha)} denote the claim that for any {\varepsilon>0}, there exists an ancient Markov chain {(f_n)_{n \in {\bf Z}}} with {\|f_n\|_{L^1(X)} = \alpha} such that {\sup_{n \in {\bf Z}} f_n \geq 1-\varepsilon} on a set of measure at least {1-\varepsilon}. Clearly {P(1)} holds since we can just take {f_n=1} for all {n}. Our objective is to show that {P(\alpha)} holds for arbitrarily small {\alpha}. The heart of Ornstein’s argument is then the implication

\displaystyle  P(\alpha) \implies P( \alpha (1 - \frac{\alpha}{4}) ) \ \ \ \ \ (1)

for any {0 < \alpha \leq 1}, which upon iteration quickly gives the desired claim.

Let’s see informally how (1) works. By hypothesis, and ignoring epsilons, we can find an ancient Markov chain {(f_n)_{n \in {\bf Z}}} on some probability space {X} of total mass {\|f_n\|_{L^1(X)} = \alpha}, such that {\sup_n f_n} attains the value of {1} or greater almost everywhere. Assuming that the Markov process is irreducible, the {f_n} will eventually converge as {n \rightarrow \infty} to the constant value of {\|f_n\|_{L^1(X)}}, in particular its final state will essentially stay above {\alpha} (up to small errors).
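The convergence of an irreducible chain to the constant given by its total mass can be seen in a small simulation. This is only a toy example (a lazy random walk on a cycle, not Ornstein's actual chain):

```python
import numpy as np

# Toy illustration: a lazy random walk on the cycle Z/m is a self-adjoint,
# irreducible, aperiodic Markov operator with uniform stationary measure, so
# P^n f converges to the constant function whose value is the total mass of f.
m = 12
P = np.zeros((m, m))
for i in range(m):
    P[i, i] = 0.5                    # laziness (ensures aperiodicity)
    P[i, (i - 1) % m] = 0.25
    P[i, (i + 1) % m] = 0.25

f = np.zeros(m)
f[0] = 1.0                           # point mass, total mass 1

g = f.copy()
for _ in range(500):
    g = P @ g                        # g is now essentially the constant vector 1/m
```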

Now suppose we duplicate the Markov process by replacing {X} with a double copy {X \times \{1,2\}} (giving {\{1,2\}} the uniform probability measure), and using the disjoint sum of the Markov operators on {X \times \{1\}} and {X \times \{2\}} as the propagator, so that there is no interaction between the two components of this new system. Then the functions {f'_n(x,i) := f_n(x) 1_{i=1}} form an ancient Markov chain of mass at most {\alpha/2} that lives solely in the first half {X \times \{1\}} of this copy, and {\sup_n f'_n} attains the value of {1} or greater on almost all of the first half {X \times \{1\}}, but is zero on the second half. The final state of {f'_n} will be to stay above {\alpha} in the first half {X \times \{1\}}, but be zero on the second half.

Now we modify the above example by allowing an infinitesimal amount of interaction between the two halves {X \times \{1\}}, {X \times \{2\}} of the system (I mentally think of {X \times \{1\}} and {X \times \{2\}} as two identical boxes that a particle can bounce around in, and now we wish to connect the boxes by a tiny tube). The precise way in which this interaction is inserted is not terribly important so long as the new Markov process is irreducible. Once one does so, then the ancient Markov chain {(f'_n)_{n \in {\bf Z}}} in the previous example gets replaced by a slightly different ancient Markov chain {(f''_n)_{n \in {\bf Z}}} which is more or less identical with {f'_n} for negative times {n}, or for bounded positive times {n}, but for very large values of {n} the final state is now constant across the entire state space {X \times \{1,2\}}, and will stay above {\alpha/2} on this space.

Finally, we consider an ancient Markov chain {F_n} which is basically of the form

\displaystyle  F_n(x,i) \approx f''_n(x,i) + (1 - \frac{\alpha}{2}) f_{n-M}(x) 1_{i=2}

for some large parameter {M} and for all {n \leq M} (the approximation becomes increasingly inaccurate for {n} much larger than {M}, but never mind this for now). This is basically two copies of the original Markov process in separate, barely interacting state spaces {X \times \{1\}, X \times \{2\}}, but with the second copy delayed by a large time delay {M}, and also attenuated in amplitude by a factor of {1-\frac{\alpha}{2}}. The total mass of this process is now {\frac{\alpha}{2} + \frac{\alpha}{2} (1 -\frac{\alpha}{2}) = \alpha (1 - \alpha/4)}. Because of the {f''_n} component of {F_n}, we see that {\sup_n F_n} basically attains the value of {1} or greater on the first half {X \times \{1\}}. On the second half {X \times \{2\}}, we work with times {n} close to {M}. If {M} is large enough, {f''_n} would have averaged out to about {\alpha/2} at such times, but the {(1 - \frac{\alpha}{2}) f_{n-M}(x)} component can get as large as {1-\alpha/2} here. Summing (and continuing to ignore various epsilon losses), we see that {\sup_n F_n} can get as large as {1} on almost all of the second half {X \times \{2\}}. This concludes the rough sketch of how one establishes the implication (1).

It was observed by Bufetov that the spherical averages {{\mathcal A}_n} for a free group action can be lifted up to become powers {P^n} of a Markov operator, basically by randomly assigning a “velocity vector” {s \in \{a,b,a^{-1},b^{-1}\}} to one’s base point {x} and then applying the Markov process that moves {x} along that velocity vector (and then randomly updating the velocity vector at each time step, subject to the “reduced word” condition that the velocity never flips from {s} to {s^{-1}}). Thus the spherical average problem has a Markov operator interpretation, which opens the door to adapting the Ornstein construction to the setting of {F_2} systems. This turns out to be doable after a certain amount of technical artifice; the main thing is to work with {F_2}-measure preserving systems that admit ancient Markov chains that are initially supported in a very small region in the “interior” of the state space, so that one can couple such systems to each other “at the boundary” in the fashion needed to establish the analogue of (1) without disrupting the ancient dynamics of such chains. The initial such system (used to establish the base case {P(1)}) comes from basically considering the action of {F_2} on a (suitably renormalised) “infinitely large ball” in the Cayley graph, after suitably gluing together the boundary of this ball to complete the action. The ancient Markov chain associated to this system starts at the centre of this infinitely large ball at infinite negative time {n=-\infty}, and only reaches the boundary of this ball at the time {n=0}.

The lonely runner conjecture is the following open problem:

Conjecture 1 Suppose one has {n \geq 1} runners on the unit circle {{\bf R}/{\bf Z}}, all starting at the origin and moving at different speeds. Then for each runner, there is at least one time {t} for which that runner is “lonely” in the sense that it is separated by a distance at least {1/n} from all other runners.

One can normalise the speed of the lonely runner to be zero, at which point the conjecture can be reformulated (after replacing {n} by {n+1}) as follows:

Conjecture 2 Let {v_1,\dots,v_n} be non-zero real numbers for some {n \geq 1}. Then there exists a real number {t} such that the numbers {tv_1,\dots,tv_n} are all a distance at least {\frac{1}{n+1}} from the integers, thus {\|tv_1\|_{{\bf R}/{\bf Z}},\dots,\|tv_n\|_{{\bf R}/{\bf Z}} \geq \frac{1}{n+1}} where {\|x\|_{{\bf R}/{\bf Z}}} denotes the distance of {x} to the nearest integer.

This conjecture has been proven for {n \leq 7}, but remains open for larger {n}. The bound {\frac{1}{n+1}} is optimal, as can be seen by looking at the case {v_i=i} and applying the Dirichlet approximation theorem. Note that for each non-zero {v}, the set {\{ t \in {\bf R}: \|vt\|_{{\bf R}/{\bf Z}} \leq r \}} has (Banach) density {2r} for any {0 < r < 1/2}, and from this and the union bound we can easily find {t \in {\bf R}} for which

\displaystyle \|tv_1\|_{{\bf R}/{\bf Z}},\dots,\|tv_n\|_{{\bf R}/{\bf Z}} \geq \frac{1}{2n}-\varepsilon

for any {\varepsilon>0}, but it has proven to be quite challenging to remove the factor of {2} to increase {\frac{1}{2n}} to {\frac{1}{n+1}}. (As far as I know, even improving {\frac{1}{2n}} to {\frac{1+c}{2n}} for some absolute constant {c>0} and sufficiently large {n} remains open.)

The speeds {v_1,\dots,v_n} in the above conjecture are arbitrary non-zero reals, but it has been known for some time that one can reduce without loss of generality to the case when the {v_1,\dots,v_n} are rationals, or equivalently (by scaling) to the case where they are integers; see e.g. Section 4 of this paper of Bohman, Holzman, and Kleitman.
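Once the speeds are integers, everything is periodic in {t} with period {1}, so one can search for lonely times numerically over {[0,1)}. A small sketch (the grid search is of course only a heuristic, not a proof):

```python
import numpy as np

def lonely_time(speeds, samples=100000):
    """For integer speeds v_1,...,v_n, grid-search t in [0,1) maximising
    min_j ||t v_j||_{R/Z}; returns (best_t, best_separation). Conjecture 2
    predicts a separation of at least 1/(n+1) for n speeds."""
    t = np.linspace(0.0, 1.0, samples, endpoint=False)
    d = np.full_like(t, 0.5)             # ||x||_{R/Z} is at most 1/2
    for v in speeds:
        frac = (t * v) % 1.0
        d = np.minimum(d, np.minimum(frac, 1.0 - frac))
    i = int(np.argmax(d))
    return t[i], d[i]

# The extremal example v_i = i with n = 3: the best separation is 1/(n+1) = 1/4,
# attained at t = 1/4 (and t = 3/4).
t_best, sep = lonely_time([1, 2, 3])
```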

In this post I would like to remark on a slight refinement of this reduction, in which the speeds {v_1,\dots,v_n} are integers of bounded size, where the bound depends on {n}. More precisely:

Proposition 3 In order to prove the lonely runner conjecture, it suffices to do so under the additional assumption that the {v_1,\dots,v_n} are integers of size at most {n^{Cn^2}}, where {C} is an (explicitly computable) absolute constant. (More precisely: if this restricted version of the lonely runner conjecture is true for all {n \leq n_0}, then the original version of the conjecture is also true for all {n \leq n_0}.)

In principle, this proposition allows one to verify the lonely runner conjecture for a given {n} in finite time; however the number of cases to check with this proposition grows faster than exponentially in {n}, and so this is unfortunately not a feasible approach to verifying the lonely runner conjecture for more values of {n} than currently known.

One of the key tools needed to prove this proposition is the following additive combinatorics result. Recall that a generalised arithmetic progression (or {GAP}) in the reals {{\bf R}} is a set of the form

\displaystyle  P = \{ n_1 v_1 + \dots + n_d v_d: n_1,\dots,n_d \in {\bf Z}; |n_1| \leq N_1, \dots, |n_d| \leq N_d \}

for some {v_1,\dots,v_d \in {\bf R}} and {N_1,\dots,N_d > 0}; the quantity {d} is called the rank of the progression. If {t>0}, the progression {P} is said to be {t}-proper if the sums {n_1 v_1 + \dots + n_d v_d} with {|n_i| \leq t N_i} for {i=1,\dots,d} are all distinct. We have

Lemma 4 (Progressions lie inside proper progressions) Let {P} be a GAP of rank {d} in the reals, and let {t \geq 1}. Then {P} is contained in a {t}-proper GAP {Q} of rank at most {d}, with

\displaystyle |Q| \leq (2t)^d d^{6d^2} \prod_{i=1}^d (2N_i+1).

Proof: See Theorem 2.1 of this paper of Bilu. (Very similar results can also be found in Theorem 3.40 of my book with Van Vu, or Theorem 1.10 of this paper of mine with Van Vu.) \Box
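To make the definitions concrete, here is a small sketch (a generic illustration, unrelated to Bilu's proof) that enumerates a GAP and tests {t}-properness by collision detection:

```python
from itertools import product

def gap_elements(vs, Ns):
    """All sums n_1 v_1 + ... + n_d v_d with |n_i| <= N_i (a rank-d GAP)."""
    ranges = [range(-N, N + 1) for N in Ns]
    return [sum(n * v for n, v in zip(ns, vs)) for ns in product(*ranges)]

def is_t_proper(vs, Ns, t):
    """A GAP is t-proper if the sums with |n_i| <= t*N_i are pairwise distinct."""
    ranges = [range(-int(t * N), int(t * N) + 1) for N in Ns]
    sums = [sum(n * v for n, v in zip(ns, vs)) for ns in product(*ranges)]
    return len(sums) == len(set(sums))

# {n1*1 + n2*10 : |n1| <= 3, |n2| <= 2} is 1-proper (no collisions), but
# {n1*1 + n2*2 : |n1| <= 3, |n2| <= 2} is not, since 2*1 + 0*2 = 0*1 + 1*2.
```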

Now let {n \geq 1}, and assume inductively that the lonely runner conjecture has been proven for all smaller values of {n}, as well as for the current value of {n} in the case that {v_1,\dots,v_n} are integers of size at most {n^{Cn^2}} for some sufficiently large {C}. We will show that the lonely runner conjecture holds in general for this choice of {n}.

Let {v_1,\dots,v_n} be non-zero real numbers. Let {C_0} be a large absolute constant to be chosen later. From the above lemma applied to the GAP {\{ n_1 v_1 + \dots + n_n v_n: n_1,\dots,n_n \in \{-1,0,1\}\}}, one can find an {n^{C_0n}}-proper GAP {Q} of rank at most {n} containing {\{v_1,\dots,v_n\}} such that

\displaystyle  |Q| \leq (6n^{C_0 n})^n n^{6n^2};

in particular {|Q| \leq n^{Cn^2}} if {C} is large enough depending on {C_0}.

We write

\displaystyle  Q = \{ n_1 w_1 + \dots + n_d w_d: n_1,\dots,n_d \in {\bf Z}; |n_1| \leq N_1,\dots,|n_d| \leq N_d \}

for some {d \leq n}, {w_1,\dots,w_d}, and {N_1,\dots,N_d \geq 0}. We thus have {v_i = \phi(a_i)} for {i=1,\dots,n}, where {\phi: {\bf R}^d \rightarrow {\bf R}} is the linear map {\phi(n_1,\dots,n_d) := n_1 w_1 + \dots + n_d w_d} and {a_1,\dots,a_n \in {\bf Z}^d} are non-zero and lie in the box {\{ (n_1,\dots,n_d) \in {\bf R}^d: |n_1| \leq N_1,\dots,|n_d| \leq N_d \}}.

We now need an elementary lemma that allows us to create a “collision” between two of the {a_1,\dots,a_n} via a linear projection, without making any of the {a_i} collide with the origin:

Lemma 5 Let {a_1,\dots,a_n \in {\bf R}^d} be non-zero vectors that are not all collinear with the origin. Then, after replacing one or more of the {a_i} with their negatives {-a_i} if necessary, there exists a pair {a_i,a_j} such that {a_i-a_j \neq 0}, and such that none of the {a_1,\dots,a_n} is a scalar multiple of {a_i-a_j}.

Proof: We may assume that {d \geq 2}, since the {d \leq 1} case is vacuous. Applying a generic linear projection from {{\bf R}^d} to {{\bf R}^2} (which does not affect collinearity, or the property that a given {a_k} is a scalar multiple of {a_i-a_j}), we may then reduce to the case {d=2}.

By a rotation and relabeling, we may assume that {a_1} lies on the negative {x}-axis; by flipping signs as necessary we may then assume that all of the {a_2,\dots,a_n} lie in the closed right half-plane. As the {a_i} are not all collinear with the origin, one of the {a_i} lies off of the {x}-axis; by relabeling, we may assume that {a_2} lies off of the {x}-axis and makes a minimal angle with the {x}-axis. Then the angle of {a_2-a_1} with the {x}-axis is non-zero but smaller than any non-zero angle that any of the {a_i} make with this axis, and so none of the {a_i} are a scalar multiple of {a_2-a_1}, and the claim follows. \Box
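Lemma 5 can also be confirmed by brute force in small cases. The following sketch (purely illustrative; the helper names are invented) searches over all sign flips and pairs:

```python
import itertools

def collinear(u, v):
    """True iff the 2D vectors u and v are parallel (cross product vanishes)."""
    return u[0] * v[1] - u[1] * v[0] == 0

def lemma5_witness(vectors):
    """Search for the configuration promised by Lemma 5: signs eps_i and a pair
    (i, j) with eps_i a_i - eps_j a_j != 0 such that no eps_k a_k is a scalar
    multiple of that difference. Returns (signs, i, j), or None if none exists."""
    n = len(vectors)
    for signs in itertools.product([1, -1], repeat=n):
        a = [(s * px, s * py) for s, (px, py) in zip(signs, vectors)]
        for i, j in itertools.combinations(range(n), 2):
            d = (a[i][0] - a[j][0], a[i][1] - a[j][1])
            if d != (0, 0) and all(not collinear(ak, d) for ak in a):
                return signs, i, j
    return None

# non-zero integer vectors, not all collinear with the origin
witness = lemma5_witness([(1, 0), (2, 1), (-1, 3), (0, -2)])
```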

We now return to the proof of the proposition. If the {a_1,\dots,a_n} are all collinear with the origin, then {\phi(a_1),\dots,\phi(a_n)} lie in a one-dimensional arithmetic progression {\{ mv: |m| \leq |Q| \}}, and then by rescaling we may take the {v_1,\dots,v_n} to be integers of magnitude at most {|Q| \leq n^{Cn^2}}, at which point we are done by hypothesis. Thus, we may assume that the {a_1,\dots,a_n} are not all collinear with the origin, and so by the above lemma and relabeling we may assume that {a_n-a_1} is non-zero, and that none of the {a_i} are scalar multiples of {a_n-a_1}.

We write

\displaystyle  a_n-a_1 = (c_1,\dots,c_d) \ \ \ \ \ (1)

with {|c_i| \leq 2 N_i} for {i=1,\dots,d}; by relabeling we may assume without loss of generality that {c_d} is non-zero, and furthermore that

\displaystyle  \frac{|c_i|}{N_i} \leq \frac{|c_d|}{N_d}

for {i=1,\dots,d}. We can also factor

\displaystyle  (c_1,\dots,c_d) = q (c'_1,\dots,c'_d) \ \ \ \ \ (2)

where {q} is a natural number and {c'_1,\dots,c'_d} have no common factor.

We now define a variant {\tilde \phi: {\bf R}^d \rightarrow {\bf R}} of {\phi: {\bf R}^d \rightarrow {\bf R}} by the map

\displaystyle  \tilde \phi(n_1,\dots,n_d) := n_1 \tilde w_1 + \dots + n_{d-1} \tilde w_{d-1} - \frac{n_d}{c_d} (c_1 \tilde w_1 + \dots + c_{d-1} \tilde w_{d-1}),

where the {\tilde w_1,\dots,\tilde w_{d-1}} are real numbers that are linearly independent over {{\bf Q}}, whose precise value will not be of importance in our argument. This is a linear map with the property that {\tilde \phi(a_n-a_1)=0}, so that {\tilde \phi(a_1),\dots,\tilde \phi(a_n)} consists of at most {n-1} distinct real numbers, which are non-zero since none of the {a_i} are scalar multiples of {a_n-a_1}, and the {\tilde w_i} are linearly independent over {{\bf Q}}. As we are assuming inductively that the lonely runner conjecture holds for {n-1}, we conclude (after deleting duplicates) that there exists at least one real number {\tilde t} such that

\displaystyle  \| \tilde t \tilde \phi(a_1) \|_{{\bf R}/{\bf Z}}, \dots, \| \tilde t \tilde \phi(a_n) \|_{{\bf R}/{\bf Z}} \geq \frac{1}{n}.

We would like to “approximate” {\phi} by {\tilde \phi} to then conclude that there is at least one real number {t} such that

\displaystyle  \| t \phi(a_1) \|_{{\bf R}/{\bf Z}}, \dots, \| t \phi(a_n) \|_{{\bf R}/{\bf Z}} \geq \frac{1}{n+1}.

It turns out that we can do this by a Fourier-analytic argument taking advantage of the {n^{C_0 n}}-proper nature of {Q}. Firstly, we see from the Dirichlet approximation theorem that one has

\displaystyle  \| (\tilde t' - \tilde t) \tilde \phi(a_1) \|_{{\bf R}/{\bf Z}}, \dots, \| (\tilde t' - \tilde t) \tilde \phi(a_n) \|_{{\bf R}/{\bf Z}} \leq \frac{1}{10 n^2}

for a set of times {\tilde t'} of (Banach) density {\gg n^{-O(n)}}. Thus, by the triangle inequality, we have

\displaystyle  \| \tilde t' \tilde \phi(a_1) \|_{{\bf R}/{\bf Z}}, \dots, \| \tilde t' \tilde \phi(a_n) \|_{{\bf R}/{\bf Z}} \geq \frac{1}{n} - \frac{1}{10n^2}

for a set of times {\tilde t'} of density {\gg n^{-O(n)}}.

Applying a smooth Fourier multiplier of Littlewood-Paley type, one can find a trigonometric polynomial

\displaystyle  \eta(x) = \sum_{m: |m| \leq n^{C_0 n/10}} b_m e^{2\pi i mx}

which takes values in {[0,1]}, is {\gg 1} for {\|x\|_{{\bf R}/{\bf Z}} \geq \frac{1}{n} - \frac{1}{10n^2}}, and is no larger than {O( n^{-100 C_0n} )} for {\|x\|_{{\bf R}/{\bf Z}} \leq \frac{1}{n+1}}. We then have

\displaystyle  \mathop{\bf E}_t \prod_{j=1}^n \eta( t \tilde \phi(a_j) ) \gg n^{-O(n)}

where {\mathop{\bf E}_t f(t)} denotes the mean value of a quasiperiodic function {f} on the reals {{\bf R}}. We expand the left-hand side out as

\displaystyle  \sum_{m_1,\dots,m_n: m_1 \tilde \phi(a_1) + \dots + m_n \tilde \phi(a_n) = 0} b_{m_1} \dots b_{m_n}.

From the genericity of {\tilde w_1,\dots,\tilde w_{d-1}}, we see that the constraint

\displaystyle  m_1 \tilde \phi(a_1) + \dots + m_n \tilde \phi(a_n) = 0

occurs if and only if {m_1 a_1 + \dots + m_n a_n} is a scalar multiple of {a_n-a_1}, or equivalently (by (1), (2)) an integer multiple of {(c'_1,\dots,c'_d)}. Thus

\displaystyle  \sum_{m_1,\dots,m_n: m_1 a_1 + \dots + m_n a_n \in {\bf Z} (c'_1,\dots,c'_d)} b_{m_1} \dots b_{m_n} \gg n^{-O(n)}. \ \ \ \ \ (3)

Next, we consider the average

\displaystyle  \mathop{\bf E}_t \varphi( t \xi ) \prod_{j=1}^n \eta( t v_j ) \ \ \ \ \ (4)

where

\displaystyle  \xi := c'_1 w_1 + \dots + c'_d w_d. \ \ \ \ \ (5)

and {\varphi} is the Dirichlet kernel

\displaystyle  \varphi(x) := \sum_{m: |m| \leq n^{C_0 n/2}} e^{2\pi i mx}.

By Fourier expansion and writing {v_j = \phi(a_j)}, we may write (4) as

\displaystyle  \sum_{m,m_1,\dots,m_n: |m| \leq n^{C_0n/2}; m_1 \phi(a_1) + \dots + m_n \phi(a_n) = m \xi} b_{m_1} \dots b_{m_n}.

The support of the {b_{m_i}} implies that {|m_i| \leq n^{C_0n/10}}. Because of the {n^{C_0 n}}-properness of {Q}, we see (for {n} large enough) that the equation

\displaystyle  m_1 \phi(a_1) + \dots + m_n \phi(a_n) = m \xi \ \ \ \ \ (6)

implies that

\displaystyle  m_1 a_1 + \dots + m_n a_n \in {\bf Z} (c'_1,\dots,c'_d) \ \ \ \ \ (7)

and conversely that (7) implies that (6) holds for some {m} with {|m| \leq n^{C_0 n/2}}. From (3) we thus have

\displaystyle  \mathop{\bf E}_t \varphi( t \xi ) \prod_{j=1}^n \eta( t v_j ) \gg n^{-O(n)}.

In particular, there exists a {t} such that

\displaystyle  \varphi( t \xi ) \prod_{j=1}^n \eta( t v_j ) \gg n^{-O(n)}.

Since {\varphi} is bounded in magnitude by {n^{C_0n/2}}, and {\eta} is bounded by {1}, we thus have

\displaystyle  \eta(t v_j) \gg n^{-C_0 n/2 - O(n)}

for each {1 \leq j \leq n}, which by the size properties of {\eta} implies that {\|tv_j\|_{{\bf R}/{\bf Z}} \geq \frac{1}{n+1}} for all {1 \leq j \leq n}, giving the lonely runner conjecture for {n}.

The von Neumann ergodic theorem (the Hilbert space version of the mean ergodic theorem) asserts that if {U: H \rightarrow H} is a unitary operator on a Hilbert space {H}, and {v \in H} is a vector in that Hilbert space, then one has

\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N U^n v = \pi_{H^U} v

in the strong topology, where {H^U := \{ w \in H: Uw = w \}} is the {U}-invariant subspace of {H}, and {\pi_{H^U}} is the orthogonal projection to {H^U}. (See e.g. these previous lecture notes for a proof.) The same proof extends to more general amenable groups: if {G} is a countable amenable group acting on a Hilbert space {H} by unitary transformations {T^g: H \rightarrow H} for {g \in G}, and {v \in H} is a vector in that Hilbert space, then one has

\displaystyle \lim_{N \rightarrow \infty} \mathop{\bf E}_{g \in \Phi_N} T^g v = \pi_{H^G} v \ \ \ \ \ (1)

 

for any Folner sequence {\Phi_N} of {G}, where {H^G := \{ w \in H: T^g w = w \hbox{ for all }g \in G \}} is the {G}-invariant subspace, and {\mathop{\bf E}_{a \in A} f(a) := \frac{1}{|A|} \sum_{a \in A} f(a)} is the average of {f} on {A}. Thus one can interpret {\pi_{H^G} v} as a certain average of elements of the orbit {Gv := \{ T^g v: g \in G \}} of {v}.
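As a concrete finite-dimensional illustration of the {{\bf Z}}-action case of this theorem, take {U} to be a rotation in two coordinates together with a fixed third coordinate; the Cesàro averages then converge to the projection onto the invariant subspace. A minimal numerical sketch:

```python
import numpy as np

# U rotates the first two coordinates by an angle that is not a rational
# multiple of 2*pi, and fixes the third; the invariant subspace is the
# third coordinate axis.
theta = 1.0
U = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
v = np.array([1.0, 0.0, 1.0])

# Cesaro average (1/N) * sum_{n=1}^N U^n v
N = 100000
avg = np.zeros(3)
w = v.copy()
for _ in range(N):
    w = U @ w
    avg += w
avg /= N
# avg is close to (0, 0, 1), the orthogonal projection of v onto the
# invariant subspace, with the rotating component averaging out at rate O(1/N).
```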

In a previous blog post, I noted a variant of this ergodic theorem (due to Alaoglu and Birkhoff) that holds even when the group {G} is not amenable (or not discrete), using a more abstract notion of averaging:

Theorem 1 (Abstract ergodic theorem) Let {G} be an arbitrary group acting unitarily on a Hilbert space {H}, and let {v} be a vector in {H}. Then {\pi_{H^G} v} is the element in the closed convex hull of {Gv := \{ T^g v: g \in G \}} of minimal norm, and is also the unique element of {H^G} in this closed convex hull.

I recently stumbled upon a different way to think about this theorem, in the additive case {G = (G,+)} when {G} is abelian, which has a closer resemblance to the classical mean ergodic theorem. Given an arbitrary additive group {G = (G,+)} (not necessarily discrete, or countable), let {{\mathcal F}} denote the collection of finite non-empty multisets in {G} – that is to say, unordered collections {\{a_1,\dots,a_n\}} of elements {a_1,\dots,a_n} of {G}, not necessarily distinct, for some positive integer {n}. Given two multisets {A = \{a_1,\dots,a_n\}}, {B = \{b_1,\dots,b_m\}} in {{\mathcal F}}, we can form the sum set {A + B := \{ a_i + b_j: 1 \leq i \leq n, 1 \leq j \leq m \}}. Note that the sum set {A+B} can contain multiplicity even when {A, B} do not; for instance, {\{ 1,2\} + \{1,2\} = \{2,3,3,4\}}. Given a multiset {A = \{a_1,\dots,a_n\}} in {{\mathcal F}}, and a function {f: G \rightarrow H} from {G} to a vector space {H}, we define the average {\mathop{\bf E}_{a \in A} f(a)} as

\displaystyle \mathop{\bf E}_{a \in A} f(a) = \frac{1}{n} \sum_{j=1}^n f(a_j).

Note that the multiplicity function of the set {A} affects the average; for instance, we have {\mathop{\bf E}_{a \in \{1,2\}} a = \frac{3}{2}}, but {\mathop{\bf E}_{a \in \{1,2,2\}} a = \frac{5}{3}}.
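These multiset operations are easy to make concrete; a minimal sketch using `Counter` to track multiplicity, reproducing the two examples above:

```python
from collections import Counter
from fractions import Fraction

def sumset(A: Counter, B: Counter) -> Counter:
    """Multiset sum set A + B: each pair (a, b) contributes a + b, with multiplicity."""
    out = Counter()
    for a, ma in A.items():
        for b, mb in B.items():
            out[a + b] += ma * mb
    return out

def average(A: Counter) -> Fraction:
    """E_{a in A} a, where elements are counted with multiplicity."""
    total = sum(a * m for a, m in A.items())
    size = sum(A.values())
    return Fraction(total, size)

assert sumset(Counter([1, 2]), Counter([1, 2])) == Counter([2, 3, 3, 4])
assert average(Counter([1, 2])) == Fraction(3, 2)
assert average(Counter([1, 2, 2])) == Fraction(5, 3)
```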

We can define a directed set on {{\mathcal F}} as follows: given two multisets {A,B \in {\mathcal F}}, we write {A \geq B} if we have {A = B+C} for some {C \in {\mathcal F}}. Thus for instance we have {\{ 1, 2, 2, 3\} \geq \{1,2\}}. It is easy to verify that this operation is transitive and reflexive, and is directed because any two elements {A,B} of {{\mathcal F}} have a common upper bound, namely {A+B}. (This is where we need {G} to be abelian.) The notion of convergence along a net now allows us to define the notion of convergence along {{\mathcal F}}: given a family {x_A} of points in a topological space {X} indexed by elements {A} of {{\mathcal F}}, and a point {x} in {X}, we say that {x_A} converges to {x} along {{\mathcal F}} if, for every open neighbourhood {U} of {x} in {X}, one has {x_A \in U} for sufficiently large {A}, that is to say there exists {B \in {\mathcal F}} such that {x_A \in U} for all {A \geq B}. If the topological space {X} is Hausdorff, then the limit {x} is unique (if it exists), and we then write

\displaystyle x = \lim_{A \rightarrow G} x_A.

When {x_A} takes values in the reals, one can also define the limit superior or limit inferior along such nets in the obvious fashion.

We can then give an alternate formulation of the abstract ergodic theorem in the abelian case:

Theorem 2 (Abelian abstract ergodic theorem) Let {G = (G,+)} be an arbitrary additive group acting unitarily on a Hilbert space {H}, and let {v} be a vector in {H}. Then we have

\displaystyle \pi_{H^G} v = \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v

in the strong topology of {H}.

Proof: Suppose that {A \geq B}, so that {A=B+C} for some {C \in {\mathcal F}}. Then

\displaystyle \mathop{\bf E}_{a \in A} T^a v = \mathop{\bf E}_{c \in C} T^c ( \mathop{\bf E}_{b \in B} T^b v )

so by unitarity and the triangle inequality we have

\displaystyle \| \mathop{\bf E}_{a \in A} T^a v \|_H \leq \| \mathop{\bf E}_{b \in B} T^b v \|_H,

thus {\| \mathop{\bf E}_{a \in A} T^a v \|_H^2} is monotone non-increasing in {A}. Since this quantity is bounded between {0} and {\|v\|_H^2}, we conclude that the limit {\lim_{A \rightarrow G} \| \mathop{\bf E}_{a \in A} T^a v \|_H^2} exists. Thus, for any {\varepsilon > 0}, we have for sufficiently large {A} that

\displaystyle \| \mathop{\bf E}_{b \in B} T^b v \|_H^2 \geq \| \mathop{\bf E}_{a \in A} T^a v \|_H^2 - \varepsilon

for all {B \geq A}. In particular, for any {g \in G}, we have

\displaystyle \| \mathop{\bf E}_{b \in A + \{0,g\}} T^b v \|_H^2 \geq \| \mathop{\bf E}_{a \in A} T^a v \|_H^2 - \varepsilon.

We can write

\displaystyle \mathop{\bf E}_{b \in A + \{0,g\}} T^b v = \frac{1}{2} \mathop{\bf E}_{a \in A} T^a v + \frac{1}{2} T^g \mathop{\bf E}_{a \in A} T^a v

and so from the parallelogram law and unitarity we have

\displaystyle \| \mathop{\bf E}_{a \in A} T^a v - T^g \mathop{\bf E}_{a \in A} T^a v \|_H^2 \leq 4 \varepsilon

for all {g \in G}, and hence by the triangle inequality (averaging {g} over a finite multiset {C})

\displaystyle \| \mathop{\bf E}_{a \in A} T^a v - \mathop{\bf E}_{b \in A+C} T^b v \|_H^2 \leq 4 \varepsilon

for any {C \in {\mathcal F}}. This shows that {\mathop{\bf E}_{a \in A} T^a v} is a Cauchy net in {H} (in the strong topology), and hence (by the completeness of {H}) tends to a limit. Shifting {A} by a group element {g}, we have

\displaystyle \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v = \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A + \{g\}} T^a v = T^g \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v

and hence {\lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v} is invariant under shifts, and thus lies in {H^G}. On the other hand, for any {w \in H^G} and {A \in {\mathcal F}}, we have

\displaystyle \langle \mathop{\bf E}_{a \in A} T^a v, w \rangle_H = \mathop{\bf E}_{a \in A} \langle v, T^{-a} w \rangle_H = \langle v, w \rangle_H

and thus on taking strong limits

\displaystyle \langle \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v, w \rangle_H = \langle v, w \rangle_H

and so {v - \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v} is orthogonal to {H^G}. Combining these two facts we see that {\lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v} is equal to {\pi_{H^G} v} as claimed. \Box

To relate this result to the classical ergodic theorem, we observe

Lemma 3 Let {G} be a countable additive group, with a F{\o}lner sequence {\Phi_n}, and let {f_g} be a bounded sequence in a normed vector space indexed by {G}. If {\lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} f_a} exists, then {\lim_{n \rightarrow \infty} \mathop{\bf E}_{a \in \Phi_n} f_a} exists, and the two limits are equal.

Proof: From the F{\o}lner property, we see that for any {A} and any {\varepsilon>0}, the averages {\mathop{\bf E}_{a \in \Phi_n} f_a} and {\mathop{\bf E}_{a \in A+\Phi_n} f_a} differ by at most {\varepsilon} in norm if {n} is sufficiently large depending on {A}, {\varepsilon} (and the {f_a}). On the other hand, by the existence of the limit {\lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} f_a}, the averages {\mathop{\bf E}_{a \in A} f_a} and {\mathop{\bf E}_{a \in A + \Phi_n} f_a} differ by at most {\varepsilon} in norm if {A} is sufficiently large depending on {\varepsilon} (regardless of how large {n} is). The claim follows. \Box
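For a concrete finite-dimensional example of Theorem 2 and Lemma 3, let {G = {\bf Z}} act unitarily on {{\bf C}^2} by rotating the second coordinate by a primitive cube root of unity; the averages over {\{0,\dots,n-1\}} (which form both a F{\o}lner sequence and a cofinal family of multisets) converge to the orthogonal projection onto the invariant vectors. A toy numerical sketch:

```python
import cmath, math

omega = cmath.exp(2j * math.pi / 3)   # T acts unitarily on C^2 by (v1, v2) -> (v1, omega * v2)

def ergodic_avg(v, n):
    # E_{a in {0,...,n-1}} T^a v; these averaging sets are cofinal among
    # multisets in F, and also form a Folner sequence for Z.
    v1, v2 = v
    return (v1, v2 * sum(omega ** a for a in range(n)) / n)

for n in (3, 30, 300):
    print(ergodic_avg((1.0, 1.0), n))
# The second coordinate tends to 0: the projection of (1, 1) onto the
# T-invariant subspace {(z, 0)} is (1, 0), as Theorem 2 predicts.
```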

It turns out that this approach can also be used as an alternate way to construct the Gowers-Host-Kra seminorms in ergodic theory, which has the feature that it does not explicitly require any amenability on the group {G} (or separability on the underlying measure space), though, as pointed out to me in comments, even uncountable abelian groups are amenable in the sense of possessing an invariant mean, even if they do not have a F{\o}lner sequence.

Given an arbitrary additive group {G}, define a {G}-system {({\mathrm X}, T)} to be a probability space {{\mathrm X} = (X, {\mathcal X}, \mu)} (not necessarily separable or standard Borel), together with a collection {T^g: X \rightarrow X} of invertible, measure-preserving maps, such that {T^0} is the identity and {T^g T^h = T^{g+h}} (modulo null sets) for all {g,h \in G}. This then gives isomorphisms {T^g: L^p({\mathrm X}) \rightarrow L^p({\mathrm X})} for {1 \leq p \leq \infty} by setting {T^g f(x) := f(T^{-g} x)}. From the above abstract ergodic theorem, we see that

\displaystyle {\mathbf E}( f | {\mathcal X}^G ) = \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a f

in the strong topology of {L^2({\mathrm X})} for any {f \in L^2({\mathrm X})}, where {{\mathcal X}^G} is the collection of measurable sets {E} that are essentially {G}-invariant in the sense that {T^g E = E} modulo null sets for all {g \in G}, and {{\mathbf E}(f|{\mathcal X}^G)} is the conditional expectation of {f} with respect to {{\mathcal X}^G}.
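As a toy instance of this identity, take {X = {\bf Z}/6{\bf Z}} with uniform measure and the {{\bf Z}}-action generated by rotation by {2}; the invariant {\sigma}-algebra is generated by the two orbits {\{0,2,4\}} and {\{1,3,5\}}, and the ergodic averages converge to (and, for {n} divisible by {3}, exactly equal) the orbit means. A quick sketch (helper names are mine):

```python
def T(a, x):
    # Z-action on Z/6 generated by rotation by 2: T^a x = x + 2a mod 6.
    return (x + 2 * a) % 6

def ergodic_average(f, x, n):
    # E_{a in {0,...,n-1}} T^a f (x), with (T^a f)(x) = f(T^{-a} x).
    return sum(f(T(-a, x)) for a in range(n)) / n

f = lambda x: float(x)
# Orbits of the action are {0,2,4} and {1,3,5}; the averages equal the
# orbit means 2.0 and 3.0, i.e. the conditional expectation E(f | X^G).
print([ergodic_average(f, x, 300) for x in range(6)])   # [2.0, 3.0, 2.0, 3.0, 2.0, 3.0]
```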

In a similar spirit, we have

Theorem 4 (Convergence of Gowers-Host-Kra seminorms) Let {({\mathrm X},T)} be a {G}-system for some additive group {G}. Let {d} be a natural number, and for every {\omega \in\{0,1\}^d}, let {f_\omega \in L^{2^d}({\mathrm X})}, which for simplicity we take to be real-valued. Then the expression

\displaystyle \langle (f_\omega)_{\omega \in \{0,1\}^d} \rangle_{U^d({\mathrm X})} := \lim_{A_1,\dots,A_d \rightarrow G}

\displaystyle \mathop{\bf E}_{h_1 \in A_1-A_1,\dots,h_d \in A_d-A_d} \int_X \prod_{\omega \in \{0,1\}^d} T^{\omega_1 h_1 + \dots + \omega_d h_d} f_\omega\ d\mu

converges, where we write {\omega = (\omega_1,\dots,\omega_d)}, and we are using the product directed set on {{\mathcal F}^d} to define the convergence {A_1,\dots,A_d \rightarrow G}. In particular, for {f \in L^{2^d}({\mathrm X})}, the limit

\displaystyle \| f \|_{U^d({\mathrm X})}^{2^d} = \lim_{A_1,\dots,A_d \rightarrow G}

\displaystyle \mathop{\bf E}_{h_1 \in A_1-A_1,\dots,h_d \in A_d-A_d} \int_X \prod_{\omega \in \{0,1\}^d} T^{\omega_1 h_1 + \dots + \omega_d h_d} f\ d\mu

converges.

We prove this theorem below the fold. It implies a number of other known descriptions of the Gowers-Host-Kra seminorms {\|f\|_{U^d({\mathrm X})}}, for instance that

\displaystyle \| f \|_{U^d({\mathrm X})}^{2^d} = \lim_{A \rightarrow G} \mathop{\bf E}_{h \in A-A} \| f T^h f \|_{U^{d-1}({\mathrm X})}^{2^{d-1}}

for {d > 1}, while from the ergodic theorem we have

\displaystyle \| f \|_{U^1({\mathrm X})} = \| {\mathbf E}( f | {\mathcal X}^G ) \|_{L^2({\mathrm X})}.

This definition also manifestly demonstrates the cube symmetries of the Host-Kra measures {\mu^{[d]}} on {X^{\{0,1\}^d}}, defined via duality by requiring that

\displaystyle \langle (f_\omega)_{\omega \in \{0,1\}^d} \rangle_{U^d({\mathrm X})} = \int_{X^{\{0,1\}^d}} \bigotimes_{\omega \in \{0,1\}^d} f_\omega\ d\mu^{[d]}.
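In the finite model where {{\mathrm X} = {\bf Z}/N{\bf Z}} with uniform measure and the standard shift action of {{\bf Z}}, all of these limits are exact finite averages, and one can check numerically that the direct definition of {\| f \|_{U^2({\mathrm X})}^4} agrees with the {d=2} case of the recursive formula (using {\|g\|_{U^1({\mathrm X})} = |{\bf E} g|}, valid here as the shift is ergodic). A toy sketch, not an implementation of the general theorem:

```python
def u1_norm(f):
    # For the ergodic shift on Z/N, ||g||_{U^1} = |E g| (absolute value of the mean).
    return abs(sum(f) / len(f))

def u2_fourth_direct(f):
    # E_{h1,h2,x} f(x) f(x+h1) f(x+h2) f(x+h1+h2) on Z/N (real-valued f).
    N = len(f)
    total = 0.0
    for h1 in range(N):
        for h2 in range(N):
            for x in range(N):
                total += f[x] * f[(x + h1) % N] * f[(x + h2) % N] * f[(x + h1 + h2) % N]
    return total / N ** 3

def u2_fourth_recursive(f):
    # E_h ||f * T^h f||_{U^1}^2, the d=2 case of the recursive formula.
    # (Averaging over all h in Z/N, the sign of the shift is immaterial.)
    N = len(f)
    return sum(u1_norm([f[x] * f[(x + h) % N] for x in range(N)]) ** 2
               for h in range(N)) / N

f = [1.0, -1.0, 2.0, 0.5, -0.5]
print(u2_fourth_direct(f), u2_fourth_recursive(f))   # the two values agree
```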

In a subsequent blog post I hope to present a more detailed study of the {U^2} norm and its relationship with eigenfunctions and the Kronecker factor, without assuming any amenability on {G} or any separability or topological structure on {{\mathrm X}}.

Read the rest of this entry »

We return to the study of the Riemann zeta function {\zeta(s)}, focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for {\zeta}, which in turn lead to improved estimates for the error term in the prime number theorem.

In equation (21) of Notes 2 we obtained the somewhat crude estimates

\displaystyle  \zeta(s) = \sum_{n \leq x} \frac{1}{n^s} - \frac{x^{1-s}}{1-s} + O( \frac{|s|}{\sigma} \frac{1}{x^\sigma} ) \ \ \ \ \ (1)

for any {x > 0} and {s = \sigma+it} with {\sigma>0} and {s \neq 1}. Setting {x=1}, we obtained the crude estimate

\displaystyle  \zeta(s) = \frac{1}{s-1} + O( \frac{|s|}{\sigma} )

in this region. In particular, if {0 < \varepsilon \leq \sigma \ll 1} and {|t| \gg 1} then we had {\zeta(s) = O_\varepsilon( |t| )}. Using the functional equation and the Hadamard three lines lemma, we can improve this to {\zeta(s) \ll_\varepsilon |t|^{\frac{1-\sigma}{2}+\varepsilon}}; see Supplement 3.

Now we seek better upper bounds on {\zeta}. We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 33 of Supplement 3:

Proposition 1 Let {s = \sigma+it} with {0 < \varepsilon \leq \sigma \ll 1} and {|t| \gg 1}. Then

\displaystyle  \zeta(s) \ll_\varepsilon \log(2+|t|) \sup_{1 \leq M \leq N \ll |t|} N^{1-\sigma} |\frac{1}{N} \sum_{N \leq n < N+M} e( -\frac{t}{2\pi} \log n)|

where {e(x) := e^{2\pi i x}}.

Proof: We fix a smooth function {\eta: {\bf R} \rightarrow {\bf C}} with {\eta(t)=1} for {t \leq -1} and {\eta(t)=0} for {t \geq 1}, and allow implied constants to depend on {\eta}. Let {s=\sigma+it} with {\varepsilon \leq \sigma \ll 1}. From Exercise 33 of Supplement 3, we have

\displaystyle  \zeta(s) = \sum_n \frac{1}{n^s} \eta( \log n - \log C|t| ) + O_\varepsilon( 1 )

for some sufficiently large absolute constant {C}. By dyadic decomposition, we thus have

\displaystyle  \zeta(s) \ll_{\varepsilon} 1 + \log(2+|t|) \sup_{1 \leq N \ll |t|} |\sum_{N \leq n < 2N} \frac{1}{n^s} \eta( \log n - \log C|t| )|.

We can absorb the first term in the second using the {N=1} case of the supremum. Writing {\frac{1}{n^s} \eta( \log n - \log C|t| ) = N^{-\sigma} e( - \frac{t}{2\pi} \log n ) F_N(n)}, where

\displaystyle  F_N(n) := (N/n)^\sigma \eta(\log n - \log C|t| ),

it thus suffices to show that

\displaystyle  \sum_{N \leq n < 2N} e(-\frac{t}{2\pi} \log n) F_N(n) \ll \sup_{1 \leq M \leq N} |\sum_{N \leq n < N+M} e(-\frac{t}{2\pi} \log n)|

for each {N}. But from the fundamental theorem of calculus, the left-hand side can be written as

\displaystyle  F_N(2N) \sum_{N \leq n < 2N} e(-\frac{t}{2\pi} \log n)

\displaystyle - \int_0^{N} (\sum_{N \leq n < N+M} e(-\frac{t}{2\pi} \log n)) F'_N(N+M)\ dM

and the claim then follows from the triangle inequality and a routine calculation. \Box

We are thus interested in getting good bounds on the sum {\sum_{N \leq n < N+M} e( -\frac{t}{2\pi} \log n )}. More generally, we consider normalised exponential sums of the form

\displaystyle  \frac{1}{N} \sum_{n \in I} e( f(n) ) \ \ \ \ \ (2)

where {I \subset {\bf R}} is an interval of length at most {N} for some {N \geq 1}, and {f: {\bf R} \rightarrow {\bf R}} is a smooth function. We will assume smoothness estimates of the form

\displaystyle  |f^{(j)}(x)| = \exp( O(j^2) ) \frac{T}{N^j} \ \ \ \ \ (3)

for some {T>0}, all {x \in I}, and all {j \geq 1}, where {f^{(j)}} is the {j}-fold derivative of {f}; in the case {f(x) := -\frac{t}{2\pi} \log x}, {I \subset [N,2N]} of interest for the Riemann zeta function, we easily verify that these estimates hold with {T := |t|}. (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative {f^{(0)}=f} of {f} directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to {f(n)}.
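For the zeta phase {f(x) = -\frac{t}{2\pi} \log x}, one can evaluate the normalised sum (2) directly and watch the cancellation appear: the values are far below the trivial bound of {1}. A quick numerical sketch (the parameter choices are arbitrary):

```python
import cmath, math

def normalised_sum(t, N):
    # |(1/N) sum_{N <= n < 2N} e(-(t / 2 pi) log n)|, where e(x) = exp(2 pi i x),
    # so each summand is exp(-i t log n) = n^{-it}.
    s = sum(cmath.exp(-1j * t * math.log(n)) for n in range(N, 2 * N))
    return abs(s) / N

for N in (100, 1000, 10000):
    print(N, normalised_sum(1000.0, N))   # well below the trivial bound 1
```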

The trivial bound for (2) is

\displaystyle  \frac{1}{N} \sum_{n \in I} e(f(n)) \ll 1 \ \ \ \ \ (4)

and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of {O_\varepsilon(N^{-1/2+\varepsilon})} for (2) for any {\varepsilon>0} if {T = O(N^{O(1)})}; this assertion (a special case of the exponent pair hypothesis) would have many consequences (for instance, inserting it into Proposition 1 soon yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form {O(N^{1-c_K})} when {T \ll N^K} and {c_K > 0} depends on {K}. We present two such results here, which perform well for small and large values of {K} respectively:

Theorem 2 Let {2 \leq N \ll T}, let {I} be an interval of length at most {N}, and let {f: I \rightarrow {\bf R}} be a smooth function obeying (3) for all {j \geq 1} and {x \in I}.

  • (i) (van der Corput estimate) For any natural number {k \geq 2}, one has

    \displaystyle  \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll (\frac{T}{N^k})^{\frac{1}{2^k-2}} \log^{1/2} (2+T). \ \ \ \ \ (5)

  • (ii) (Vinogradov estimate) If {k} is a natural number and {T \leq N^{k}}, then

    \displaystyle  \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll N^{-c/k^2} \ \ \ \ \ (6)

    for some absolute constant {c>0}.

The factor of {\log^{1/2} (2+T)} can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of {\log T}. The estimate (6) is superior to (5) when {T \sim N^K} for {K} large, since (after optimising in {k}) (5) gives a gain of the form {N^{-c/2^{cK}}} over the trivial bound, while (6) gives {N^{-c/K^2}}. We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on {\zeta}, and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.
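To make the comparison concrete, one can tabulate the optimal exponent of {N} obtainable from (5) when {T = N^K}, ignoring the logarithmic factor and optimising over {k}; the gain decays exponentially in {K}, in contrast with the {N^{-c/K^2}} decay of (6). A small illustrative sketch (the normalisations are for illustration only):

```python
def best_vdc_exponent(K, kmax=40):
    # With T = N^K, the van der Corput bound (5) is N^{(K - k)/(2^k - 2)}
    # (ignoring the log factor); minimise this exponent over k >= 2.
    return min((K - k) / (2 ** k - 2) for k in range(2, kmax))

for K in (2, 3, 5, 10):
    print(K, best_vdc_exponent(K))
# The optimal exponent is of size roughly 2^{-K}: an exponentially small gain in K.
```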

We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating {f} like a polynomial of degree roughly {k}; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a bilinear function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about {k} times, which is why the gain is only exponentially small in {k}. The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of {k^2} – by using Hölder’s inequality in combination with the fact that the discrete curve {\{ (n,n^2,\dots,n^k): n \in \{1,\dots,M\}\}} becomes roughly equidistributed in the box {\{ (a_1,\dots,a_k): a_j = O( M^j ) \}} after taking the sumset of about {k^2} copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on using a “{p}-adic” version of this equidistribution to reduce the claim at a given scale {M} to a smaller scale {M/p} with {p \sim M^{1/k}}, and then proceeding by induction.

One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:

Exercise 3 (Subconvexity bound)

  • (i) Show that {\zeta(\frac{1}{2}+it) \ll (1+|t|)^{1/6} \log^{O(1)}(1+|t|)} for all {t \in {\bf R}}. (Hint: use the {k=3} case of the van der Corput estimate.)
  • (ii) For any {0 < \sigma < 1}, show that {\zeta(\sigma+it) \ll (1+|t|)^{\min( \frac{1-\sigma}{3}, \frac{1}{2} - \frac{\sigma}{3}) + o(1)}} as {|t| \rightarrow \infty}.

Exercise 4 Let {t} be such that {|t| \geq 100}, and let {\sigma \geq 1/2}.

  • (i) (Littlewood bound) Use the van der Corput estimate to show that {\zeta(\sigma+it) \ll \log^{O(1)} |t|} whenever {\sigma \geq 1 - O( \frac{(\log\log |t|)^2}{\log |t|} )}.
  • (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that {\zeta(\sigma+it) \ll \log^{O(1)} |t|} whenever {\sigma \geq 1 - O( \frac{(\log\log |t|)^{2/3}}{\log^{2/3} |t|} )}.

As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region {\{ \sigma+it: \sigma > 1 - c \frac{1}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}; |t| \geq 100 \}}, which in turn leads to the prime number theorem with error term

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O\left( x \exp\left( - c \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)

for {x > 100}. If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region

\displaystyle  \{ \sigma+it: \sigma > 1 - c \frac{\log\log|t|}{\log |t|}; |t| \geq 100 \}

(which is only slightly wider than the classical zero-free region) and an error term

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x \exp( - c \sqrt{\log x \log\log x} ) )

in the prime number theorem.

Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let {\chi} be a non-principal character of modulus {q}.

  • (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that {L(\sigma+it,\chi) \ll \log^{O(1)}(q|t|)} whenever {|t| \geq 100} and

    \displaystyle  \sigma \geq 1 - O( \min( \frac{\log\log(q|t|)}{\log q}, \frac{(\log\log(q|t|))^{2/3}}{\log^{2/3} |t|} ) ).

    (Hint: use the Vinogradov estimate and a change of variables to control {\sum_{n \in I: n = a\ (q)} \exp( -it \log n)} for various intervals {I} of length at most {N} and residue classes {a\ (q)}, in the regime {N \geq q^2} (say). For {N < q^2}, do not try to capture any cancellation and just use the triangle inequality instead.)

  • (ii) Obtain a zero-free region

    \displaystyle  \{ \sigma+it: \sigma > 1 - c \min( \frac{1}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}, \frac{1}{\log q} );

    \displaystyle  |t| \geq 100 \}

    for {L(s,\chi)}, for some (effective) absolute constant {c>0}.

  • (iii) Obtain the prime number theorem in arithmetic progressions with error term

    \displaystyle  \sum_{n \leq x: n = a\ (q)} \Lambda(n) = x + O\left( x \exp\left( - c_A \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)

    whenever {x > 100}, {q \leq \log^A x}, {a\ (q)} is primitive, and {c_A>0} depends (ineffectively) on {A}.

Read the rest of this entry »

In Notes 2, the Riemann zeta function {\zeta} (and more generally, the Dirichlet {L}-functions {L(\cdot,\chi)}) were extended meromorphically into the region {\{ s: \hbox{Re}(s) > 0 \}} in and to the right of the critical strip. This is a sufficient amount of meromorphic continuation for many applications in analytic number theory, such as establishing the prime number theorem and its variants. The zeroes of the zeta function in the critical strip {\{ s: 0 < \hbox{Re}(s) < 1 \}} are known as the non-trivial zeroes of {\zeta}, and thanks to the truncated explicit formulae developed in Notes 2, they control the asymptotic distribution of the primes (up to small errors).

The {\zeta} function obeys the trivial functional equation

\displaystyle  \zeta(\overline{s}) = \overline{\zeta(s)} \ \ \ \ \ (1)

for all {s} in its domain of definition. Indeed, as {\zeta(s)} is real-valued when {s} is real, the function {\zeta(s) - \overline{\zeta(\overline{s})}} vanishes on the real line and is also meromorphic, and hence vanishes everywhere. Similarly one has the functional equation

\displaystyle  \overline{L(s, \chi)} = L(\overline{s}, \overline{\chi}). \ \ \ \ \ (2)

From these equations we see that the zeroes of the zeta function are symmetric across the real axis, and the zeroes of {L(\cdot,\chi)} are the reflection of the zeroes of {L(\cdot,\overline{\chi})} across this axis.

It is a remarkable fact that these functions obey an additional, and more non-trivial, functional equation, this time establishing a symmetry across the critical line {\{ s: \hbox{Re}(s) = \frac{1}{2} \}} rather than the real axis. One consequence of this symmetry is that the zeta function and {L}-functions may be extended meromorphically to the entire complex plane. For the zeta function, the functional equation was discovered by Riemann, and reads as follows:

Theorem 1 (Functional equation for the Riemann zeta function) The Riemann zeta function {\zeta} extends meromorphically to the entire complex plane, with a simple pole at {s=1} and no other poles. Furthermore, one has the functional equation

\displaystyle  \zeta(s) = \alpha(s) \zeta(1-s) \ \ \ \ \ (3)

or equivalently

\displaystyle  \zeta(1-s) = \alpha(1-s) \zeta(s) \ \ \ \ \ (4)

for all complex {s} other than {s=0,1}, where {\alpha} is the function

\displaystyle  \alpha(s) := 2^s \pi^{s-1} \sin( \frac{\pi s}{2}) \Gamma(1-s). \ \ \ \ \ (5)

Here {\cos(z) := \frac{e^{iz} + e^{-iz}}{2}}, {\sin(z) := \frac{e^{iz}-e^{-iz}}{2i}} are the complex-analytic extensions of the classical trigonometric functions {\cos(x), \sin(x)}, and {\Gamma} is the Gamma function, whose definition and properties we review below the fold.
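As a quick sanity check on (3) and (5): since {\zeta(2) = \pi^2/6}, the functional equation forces the classical value {\zeta(-1) = \alpha(-1) \zeta(2) = -1/12}, which one can confirm numerically:

```python
import math

def alpha(s):
    # alpha(s) = 2^s pi^{s-1} sin(pi s / 2) Gamma(1 - s), here for real s < 1.
    return 2 ** s * math.pi ** (s - 1) * math.sin(math.pi * s / 2) * math.gamma(1 - s)

zeta2 = math.pi ** 2 / 6
print(alpha(-1) * zeta2)   # -0.0833... = -1/12 = zeta(-1)
```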

The functional equation can be placed in a more symmetric form as follows:

Corollary 2 (Functional equation for the Riemann xi function) The Riemann xi function

\displaystyle  \xi(s) := \frac{1}{2} s(s-1) \pi^{-s/2} \Gamma(\frac{s}{2}) \zeta(s) \ \ \ \ \ (6)

is analytic on the entire complex plane {{\bf C}} (after removing all removable singularities), and obeys the functional equations

\displaystyle  \xi(\overline{s}) = \overline{\xi(s)}

and

\displaystyle  \xi(s) = \xi(1-s). \ \ \ \ \ (7)

In particular, the zeroes of {\xi} consist precisely of the non-trivial zeroes of {\zeta}, and are symmetric about both the real axis and the critical line. Also, {\xi} is real-valued on the critical line and on the real axis.

Corollary 2 is an easy consequence of Theorem 1 together with the duplication theorem for the Gamma function, and the fact that {\zeta} has no zeroes to the right of the critical strip, and is left as an exercise to the reader (Exercise 19). The functional equation in Theorem 1 has many proofs, but most of them are related in one way or another to the Poisson summation formula

\displaystyle  \sum_n f(n) = \sum_m \hat f(2\pi m) \ \ \ \ \ (8)

(Theorem 34 from Supplement 2, at least in the case when {f} is twice continuously differentiable and compactly supported), which can be viewed as a Fourier-analytic link between the coarse-scale distribution of the integers and the fine-scale distribution of the integers. Indeed, there is a quick heuristic proof of the functional equation that comes from formally applying the Poisson summation formula to the function {1_{x>0} \frac{1}{x^s}}, and noting that the functions {x \mapsto \frac{1}{x^s}} and {\xi \mapsto \frac{1}{\xi^{1-s}}} are formally Fourier transforms of each other, up to some Gamma function factors, as well as some trigonometric factors arising from the distinction between the real line and the half-line. Such a heuristic proof can indeed be made rigorous, and we do so below the fold, while also providing Riemann’s two classical proofs of the functional equation.
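Although Theorem 34 is stated for compactly supported {f}, the formula (8) is also valid for Schwartz functions such as the Gaussian {f(x) = e^{-x^2/2}}, whose Fourier transform in the convention implicit in (8) (namely {\hat f(\xi) = \int_{\bf R} f(x) e^{-i\xi x}\ dx}) is {\sqrt{2\pi} e^{-\xi^2/2}}. This gives a quick numerical check of (8):

```python
import math

def lhs(width=20):
    # sum_n f(n) for f(x) = exp(-x^2 / 2); the tails beyond |n| = width are negligible.
    return sum(math.exp(-n * n / 2) for n in range(-width, width + 1))

def rhs(width=20):
    # sum_m f_hat(2 pi m), with f_hat(xi) = sqrt(2 pi) exp(-xi^2 / 2).
    return sum(math.sqrt(2 * math.pi) * math.exp(-2 * (math.pi * m) ** 2)
               for m in range(-width, width + 1))

print(lhs(), rhs())   # both ~ 2.50663, as Poisson summation predicts
```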

From the functional equation (and the poles of the Gamma function), one can see that {\zeta} has trivial zeroes at the negative even integers {-2,-4,-6,\dots}, in addition to the non-trivial zeroes in the critical strip. More generally, the following table summarises the zeroes and poles of the various special functions appearing in the functional equation, after they have been meromorphically extended to the entire complex plane, and with zeroes classified as “non-trivial” or “trivial” depending on whether they lie in the critical strip or not. (Exponential functions such as {2^{s-1}} or {\pi^{-s}} have no zeroes or poles, and will be ignored in this table; the zeroes and poles of rational functions such as {s(s-1)} are self-evident and will also not be displayed here.)

Function Non-trivial zeroes Trivial zeroes Poles
{\zeta(s)} Yes {-2,-4,-6,\dots} {1}
{\zeta(1-s)} Yes {1,3,5,\dots} {0}
{\sin(\pi s/2)} No Even integers No
{\cos(\pi s/2)} No Odd integers No
{\sin(\pi s)} No Integers No
{\Gamma(s)} No No {0,-1,-2,\dots}
{\Gamma(s/2)} No No {0,-2,-4,\dots}
{\Gamma(1-s)} No No {1,2,3,\dots}
{\Gamma((1-s)/2)} No No {2,4,6,\dots}
{\xi(s)} Yes No No

Among other things, this table indicates that the Gamma and trigonometric factors in the functional equation are tied to the trivial zeroes and poles of zeta, but have no direct bearing on the distribution of the non-trivial zeroes, which is the most important feature of the zeta function for the purposes of analytic number theory, beyond the fact that they are symmetric about the real axis and critical line. In particular, the Riemann hypothesis is not going to be resolved just from further analysis of the Gamma function!

The zeta function computes the “global” sum {\sum_n \frac{1}{n^s}}, with {n} ranging all the way from {1} to infinity. However, by some Fourier-analytic (or complex-analytic) manipulation, it is possible to use the zeta function to also control more “localised” sums, such as {\sum_n \frac{1}{n^s} \psi(\log n - \log N)} for some {N \gg 1} and some smooth compactly supported function {\psi: {\bf R} \rightarrow {\bf C}}. It turns out that the functional equation (3) for the zeta function localises to this context, giving an approximate functional equation which roughly speaking takes the form

\displaystyle  \sum_n \frac{1}{n^s} \psi( \log n - \log N ) \approx \alpha(s) \sum_m \frac{1}{m^{1-s}} \psi( \log M - \log m )

whenever {s=\sigma+it} and {NM = \frac{|t|}{2\pi}}; see Theorem 38 below for a precise formulation of this equation. Unsurprisingly, this form of the functional equation is also very closely related to the Poisson summation formula (8), indeed it is essentially a special case of that formula (or more precisely, of the van der Corput {B}-process). This useful identity relates long smoothed sums of {\frac{1}{n^s}} to short smoothed sums of {\frac{1}{m^{1-s}}} (or vice versa), and can thus be used to shorten exponential sums involving terms such as {\frac{1}{n^s}}, which is useful when obtaining some of the more advanced estimates on the Riemann zeta function.

We will give two other basic uses of the functional equation. The first is to get a good count (as opposed to merely an upper bound) on the density of zeroes in the critical strip, establishing the Riemann-von Mangoldt formula that the number {N(T)} of zeroes of imaginary part between {0} and {T} is {\frac{T}{2\pi} \log \frac{T}{2\pi} - \frac{T}{2\pi} + O(\log T)} for large {T}. The other is to obtain untruncated versions of the explicit formula from Notes 2, giving a remarkable exact formula for sums involving the von Mangoldt function in terms of zeroes of the Riemann zeta function. These results are not strictly necessary for most of the material in the rest of the course, but certainly help to clarify the nature of the Riemann zeta function and its relation to the primes.

In view of the material in previous notes, it should not be surprising that there are analogues of all of the above theory for Dirichlet {L}-functions {L(\cdot,\chi)}. We will restrict attention to primitive characters {\chi}, since the {L}-function for imprimitive characters merely differs from the {L}-function of the associated primitive factor by a finite Euler product; indeed, if {\chi = \chi' \chi_0} for some principal {\chi_0} whose modulus {q_0} is coprime to that of {\chi'}, then

\displaystyle  L(s,\chi) = L(s,\chi') \prod_{p|q_0} (1 - \frac{1}{p^s}) \ \ \ \ \ (9)

(cf. equation (45) of Notes 2).

The main new feature is that the Poisson summation formula needs to be “twisted” by a Dirichlet character {\chi}, and this boils down to the problem of understanding the finite (additive) Fourier transform of a Dirichlet character. This is achieved by the classical theory of Gauss sums, which we review below the fold. There is one new wrinkle; the value of {\chi(-1) \in \{-1,+1\}} plays a role in the functional equation. More precisely, we have

Theorem 3 (Functional equation for {L}-functions) Let {\chi} be a primitive character of modulus {q} with {q>1}. Then {L(s,\chi)} extends to an entire function on the complex plane, with

\displaystyle  L(s,\chi) = \varepsilon(\chi) 2^s \pi^{s-1} q^{1/2-s} \sin(\frac{\pi}{2}(s+\kappa)) \Gamma(1-s) L(1-s,\overline{\chi})

or equivalently

\displaystyle  L(1-s,\overline{\chi}) = \varepsilon(\overline{\chi}) 2^{1-s} \pi^{-s} q^{s-1/2} \sin(\frac{\pi}{2}(1-s+\kappa)) \Gamma(s) L(s,\chi)

for all {s}, where {\kappa} is equal to {0} in the even case {\chi(-1)=+1} and {1} in the odd case {\chi(-1)=-1}, and

\displaystyle  \varepsilon(\chi) := \frac{\tau(\chi)}{i^\kappa \sqrt{q}} \ \ \ \ \ (10)

where {\tau(\chi)} is the Gauss sum

\displaystyle  \tau(\chi) := \sum_{n \in {\bf Z}/q{\bf Z}} \chi(n) e(n/q). \ \ \ \ \ (11)

and {e(x) := e^{2\pi ix}}, with the convention that the {q}-periodic function {n \mapsto e(n/q)} is also (by abuse of notation) applied to {n} in the cyclic group {{\bf Z}/q{\bf Z}}.
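For instance, for the quadratic character {\chi} modulo {5} (the Legendre symbol), one has {\chi(-1) = \chi(4) = +1}, so {\kappa = 0}; a short computation gives {\tau(\chi) = \sqrt{5}}, consistent with the general fact that {|\tau(\chi)| = \sqrt{q}} for primitive {\chi}, so that {\varepsilon(\chi) = 1} in this case. A quick numerical check:

```python
import cmath, math

def chi(n):
    # Quadratic character mod 5 (Legendre symbol): +1 on squares {1, 4}, -1 on {2, 3}.
    return {0: 0, 1: 1, 2: -1, 3: -1, 4: 1}[n % 5]

def gauss_sum(chi, q):
    # tau(chi) = sum_{n mod q} chi(n) e(n / q), with e(x) = exp(2 pi i x).
    return sum(chi(n) * cmath.exp(2j * math.pi * n / q) for n in range(q))

tau = gauss_sum(chi, 5)
print(tau, abs(tau), math.sqrt(5))   # tau ~ sqrt(5) ~ 2.23607
```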

From this functional equation and (2) we see that, as with the Riemann zeta function, the non-trivial zeroes of {L(s,\chi)} (defined as the zeroes within the critical strip {\{ s: 0 < \hbox{Re}(s) < 1 \}}) are symmetric around the critical line (and, if {\chi} is real, are also symmetric around the real axis). In addition, {L(s,\chi)} acquires trivial zeroes at the negative even integers and at zero if {\chi(-1)=1}, and at the negative odd integers if {\chi(-1)=-1}. For imprimitive {\chi}, we see from (9) that {L(s,\chi)} also acquires some additional trivial zeroes on the left edge of the critical strip.

There is also a symmetric version of this equation, analogous to Corollary 2:

Corollary 4 Let {\chi,q,\varepsilon(\chi)} be as above, and set

\displaystyle  \xi(s,\chi) := (q/\pi)^{(s+\kappa)/2} \Gamma((s+\kappa)/2) L(s,\chi),

then {\xi(\cdot,\chi)} is entire with {\xi(1-s,\overline{\chi}) = \varepsilon(\overline{\chi}) \xi(s,\chi)}.

For further detail on the functional equation and its implications, I recommend the classic text of Titchmarsh or the text of Davenport.

Read the rest of this entry »

We will shortly turn to the complex-analytic approach to multiplicative number theory, which relies on the basic properties of complex analytic functions. In this supplement to the main notes, we quickly review the portions of complex analysis that we will be using in this course. We will not attempt a comprehensive review of this subject; for instance, we will completely neglect the conformal geometry or Riemann surface aspect of complex analysis, and we will also avoid using the various boundary convergence theorems for Taylor series or Dirichlet series (the latter type of result is traditionally utilised in multiplicative number theory, but I personally find such results a little unintuitive to use, and will instead rely on a slightly different set of complex-analytic tools). We will also focus on the “local” structure of complex analytic functions, in particular adopting the philosophy that such functions behave locally like complex polynomials; the classical “global” theory of entire functions, while traditionally used in the theory of the Riemann zeta function, will be downplayed in these notes. On the other hand, we will play up the relationship between complex analysis and Fourier analysis, as we will often prefer the latter tool over the former in some of the subsequent material. (In the traditional approach to the subject, the Mellin transform is used in place of the Fourier transform, but we will not emphasise the role of the Mellin transform here.)

We begin by recalling the notion of a holomorphic function, which will later be shown to be essentially synonymous with that of a complex analytic function.

Definition 1 (Holomorphic function) Let {\Omega} be an open subset of {{\bf C}}, and let {f: \Omega \rightarrow {\bf C}} be a function. If {z \in \Omega}, we say that {f} is complex differentiable at {z} if the limit

\displaystyle  f'(z) := \lim_{h \rightarrow 0; h \in {\bf C} \backslash \{0\}} \frac{f(z+h)-f(z)}{h}

exists, in which case we refer to {f'(z)} as the (complex) derivative of {f} at {z}. If {f} is complex differentiable at every point {z} of {\Omega}, and the derivative {f': \Omega \rightarrow {\bf C}} is continuous, we say that {f} is holomorphic on {\Omega}.

Exercise 2 Show that a function {f: \Omega \rightarrow {\bf C}} is holomorphic if and only if the two-variable function {(x,y) \mapsto f(x+iy)} is continuously differentiable on {\{ (x,y) \in {\bf R}^2: x+iy \in \Omega\}} and obeys the Cauchy-Riemann equation

\displaystyle  \frac{\partial}{\partial x} f(x+iy) = \frac{1}{i} \frac{\partial}{\partial y} f(x+iy). \ \ \ \ \ (1)
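As a quick numerical illustration (my own, not part of the notes), one can check the Cauchy-Riemann equation (1) for a sample holomorphic function such as {\exp} using central finite differences:

```python
import cmath

def f(z):
    return cmath.exp(z)

# central finite differences in the x and y directions at a sample point
h = 1e-6
x, y = 0.3, -0.7
dfdx = (f((x + h) + 1j * y) - f((x - h) + 1j * y)) / (2 * h)
dfdy = (f(x + 1j * (y + h)) - f(x + 1j * (y - h))) / (2 * h)

# Cauchy-Riemann: d/dx f = (1/i) d/dy f
assert abs(dfdx - dfdy / 1j) < 1e-6
```

Replacing {\exp} by a non-holomorphic function such as {z \mapsto \bar{z}} makes the assertion fail, as it should.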

Basic examples of holomorphic functions include complex polynomials

\displaystyle  P(z) = a_n z^n + \dots + a_1 z + a_0

as well as the complex exponential function

\displaystyle  \exp(z) := \sum_{n=0}^\infty \frac{z^n}{n!}

which are holomorphic on the entire complex plane {{\bf C}} (i.e., they are entire functions). The sum or product of two holomorphic functions is again holomorphic; the quotient of two holomorphic functions is holomorphic so long as the denominator is non-zero. Finally, the composition of two holomorphic functions is holomorphic wherever the composition is defined.

Exercise 3

  • (i) Establish Euler’s formula

    \displaystyle  \exp(x+iy) = e^x (\cos y + i \sin y)

    for all {x,y \in {\bf R}}. (Hint: it is a bit tricky to do this starting from the trigonometric definitions of sine and cosine; I recommend either using the Taylor series formulations of these functions instead, or alternatively relying on the ordinary differential equations obeyed by sine and cosine.)

  • (ii) Show that every non-zero complex number {z} has a complex logarithm {\log(z)} such that {\exp(\log(z))=z}, and that this logarithm is unique up to integer multiples of {2\pi i}.
  • (iii) Show that there exists a unique principal branch {\hbox{Log}(z)} of the complex logarithm in the region {{\bf C} \backslash (-\infty,0]}, defined by requiring {\hbox{Log}(z)} to be a logarithm of {z} with imaginary part between {-\pi} and {\pi}. Show that this principal branch is holomorphic with derivative {1/z}.
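As an aside, Python's `cmath.log` computes exactly this principal branch (imaginary part in {(-\pi,\pi]}), which gives a quick way to sanity-check parts (ii) and (iii) numerically:

```python
import cmath
import math

z = -1 + 1j
w = cmath.log(z)     # principal branch Log(z), imaginary part in (-pi, pi]
assert abs(cmath.exp(w) - z) < 1e-12
assert -math.pi < w.imag <= math.pi

# any other logarithm of z differs from Log(z) by an integer multiple of 2*pi*i
w2 = w + 2j * math.pi * 5
assert abs(cmath.exp(w2) - z) < 1e-10
```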

In real analysis, we have the fundamental theorem of calculus, which asserts that

\displaystyle  \int_a^b F'(t)\ dt = F(b) - F(a)

whenever {[a,b]} is a real interval and {F: [a,b] \rightarrow {\bf R}} is a continuously differentiable function. The complex analogue of this fact is that

\displaystyle  \int_\gamma F'(z)\ dz = F(\gamma(1)) - F(\gamma(0)) \ \ \ \ \ (2)

whenever {F: \Omega \rightarrow {\bf C}} is a holomorphic function, and {\gamma: [0,1] \rightarrow \Omega} is a contour in {\Omega}, by which we mean a piecewise continuously differentiable function, and the contour integral {\int_\gamma f(z)\ dz} for a continuous function {f} is defined via change of variables as

\displaystyle  \int_\gamma f(z)\ dz := \int_0^1 f(\gamma(t)) \gamma'(t)\ dt.

The complex fundamental theorem of calculus (2) follows easily from the real fundamental theorem and the chain rule.
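The complex fundamental theorem (2) is easy to check numerically; here is a sketch of my own (the choice of {F(z)=z^3} and a semicircular contour is arbitrary), approximating the contour integral by a midpoint rule:

```python
import cmath

def F(z):
    return z ** 3

def Fp(z):          # derivative F'(z) = 3 z^2
    return 3 * z ** 2

def gamma(t):
    # semicircular contour from 1 to -1 in the upper half plane
    return cmath.exp(1j * cmath.pi * t)

# midpoint-rule approximation of the contour integral of F' over gamma
N = 20000
integral = 0
for k in range(N):
    mid = (k + 0.5) / N
    dgamma = gamma((k + 1) / N) - gamma(k / N)
    integral += Fp(gamma(mid)) * dgamma

# fundamental theorem of calculus: should equal F(-1) - F(1) = -2
assert abs(integral - (F(gamma(1.0)) - F(gamma(0.0)))) < 1e-6
```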

In real analysis, we have the rather trivial fact that the integral of a continuous function on a closed contour is always zero:

\displaystyle  \int_a^b f(t)\ dt + \int_b^a f(t)\ dt = 0.

In complex analysis, the analogous fact is significantly more powerful, and is known as Cauchy’s theorem:

Theorem 4 (Cauchy’s theorem) Let {f: \Omega \rightarrow {\bf C}} be a holomorphic function in a simply connected open set {\Omega}, and let {\gamma: [0,1] \rightarrow \Omega} be a closed contour in {\Omega} (thus {\gamma(1)=\gamma(0)}). Then {\int_\gamma f(z)\ dz = 0}.

Exercise 5 Use Stokes’ theorem to give a proof of Cauchy’s theorem.

A useful reformulation of Cauchy’s theorem is that of contour shifting: if {f: \Omega \rightarrow {\bf C}} is a holomorphic function on an open set {\Omega}, and {\gamma, \tilde \gamma} are two contours in {\Omega} with {\gamma(0)=\tilde \gamma(0)} and {\gamma(1) = \tilde \gamma(1)}, such that {\gamma} can be continuously deformed into {\tilde \gamma}, then {\int_\gamma f(z)\ dz = \int_{\tilde \gamma} f(z)\ dz}. A basic application of contour shifting is the Cauchy integral formula:

Theorem 6 (Cauchy integral formula) Let {f: \Omega \rightarrow {\bf C}} be a holomorphic function in a simply connected open set {\Omega}, and let {\gamma: [0,1] \rightarrow \Omega} be a closed contour which is simple (thus {\gamma} does not traverse any point more than once, with the exception of the endpoint {\gamma(0)=\gamma(1)} that is traversed twice), and which encloses a bounded region {U} in the anticlockwise direction. Then for any {z_0 \in U}, one has

\displaystyle  \int_\gamma \frac{f(z)}{z-z_0}\ dz= 2\pi i f(z_0).

Proof: Let {\varepsilon > 0} be a sufficiently small quantity. By contour shifting, one can replace the contour {\gamma} by the sum (concatenation) of three contours: a contour {\rho} from {\gamma(0)} to {z_0+\varepsilon}, a contour {C_\varepsilon} traversing the circle {\{z: |z-z_0|=\varepsilon\}} once anticlockwise, and the reversal {-\rho} of the contour {\rho} that goes from {z_0+\varepsilon} to {\gamma(0)}. The contributions of the contours {\rho, -\rho} cancel each other, thus

\displaystyle \int_\gamma \frac{f(z)}{z-z_0}\ dz = \int_{C_\varepsilon} \frac{f(z)}{z-z_0}\ dz.

By a change of variables, the right-hand side can be expanded as

\displaystyle  2\pi i \int_0^1 f(z_0 + \varepsilon e^{2\pi i t})\ dt.

Sending {\varepsilon \rightarrow 0}, we obtain the claim. \Box
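One can also verify Theorem 6 numerically; the following sketch (my own choice of {f = \exp} and {z_0} inside the unit circle) evaluates the contour integral on the unit circle by an equispaced quadrature, which converges very rapidly for smooth periodic integrands:

```python
import cmath

def f(z):
    return cmath.exp(z)

z0 = 0.2 + 0.3j            # a point inside the unit circle
N = 4096
total = 0
for k in range(N):
    t = (k + 0.5) / N
    z = cmath.exp(2j * cmath.pi * t)       # unit circle, anticlockwise
    dz = 2j * cmath.pi * z / N             # gamma'(t) dt
    total += f(z) / (z - z0) * dz

# Cauchy integral formula: the integral equals 2*pi*i*f(z0)
assert abs(total - 2j * cmath.pi * f(z0)) < 1e-8
```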

The Cauchy integral formula has many consequences. Specialising to the case when {\gamma} traverses a circle {\{ z: |z-z_0|=r\}} around {z_0}, we conclude the mean value property

\displaystyle  f(z_0) = \int_0^1 f(z_0 + re^{2\pi i t})\ dt \ \ \ \ \ (3)

whenever {f} is holomorphic in a neighbourhood of the disk {\{ z: |z-z_0| \leq r \}}. In a similar spirit, we have the maximum principle for holomorphic functions:

Lemma 7 (Maximum principle) Let {\Omega} be a simply connected open set, and let {\gamma} be a simple closed contour in {\Omega} enclosing a bounded region {U} anti-clockwise. Let {f: \Omega \rightarrow {\bf C}} be a holomorphic function. If we have the bound {|f(z)| \leq M} for all {z} on the contour {\gamma}, then we also have the bound {|f(z_0)| \leq M} for all {z_0 \in U}.

Proof: We use an argument of Landau. Fix {z_0 \in U}. From the Cauchy integral formula and the triangle inequality we have the bound

\displaystyle  |f(z_0)| \leq C_{z_0,\gamma} M

for some constant {C_{z_0,\gamma} > 0} depending on {z_0} and {\gamma}. This ostensibly looks like a weaker bound than what we want, but we can miraculously make the constant {C_{z_0,\gamma}} disappear by the “tensor power trick“. Namely, observe that if {f} is a holomorphic function bounded in magnitude by {M} on {\gamma}, and {n} is a natural number, then {f^n} is a holomorphic function bounded in magnitude by {M^n} on {\gamma}. Applying the preceding argument with {f, M} replaced by {f^n, M^n} we conclude that

\displaystyle  |f(z_0)|^n \leq C_{z_0,\gamma} M^n

and hence

\displaystyle  |f(z_0)| \leq C_{z_0,\gamma}^{1/n} M.

Sending {n \rightarrow \infty}, we obtain the claim. \Box

Another basic application of the integral formula is

Corollary 8 Every holomorphic function {f: \Omega \rightarrow {\bf C}} is complex analytic, thus it has a convergent Taylor series around every point {z_0} in the domain. In particular, holomorphic functions are smooth, and the derivative of a holomorphic function is again holomorphic.

Conversely, it is easy to see that complex analytic functions are holomorphic. Thus, the terms “complex analytic” and “holomorphic” are synonymous, at least when working on open domains. (On a non-open set {\Omega}, saying that {f} is analytic on {\Omega} is equivalent to asserting that {f} extends to a holomorphic function of an open neighbourhood of {\Omega}.) This is in marked contrast to real analysis, in which a function can be continuously differentiable, or even smooth, without being real analytic.

Proof: By translation, we may suppose that {z_0=0}. Let {C_r} be a contour traversing the circle {\{ z: |z|=r\}} once anticlockwise, with {r} small enough that the closed disk {\{ z: |z| \leq r \}} is contained in the domain {\Omega}; then by the Cauchy integral formula one has

\displaystyle  f(z) = \frac{1}{2\pi i} \int_{C_r} \frac{f(w)}{w-z}\ dw

for all {z} in the disk {\{ z: |z| < r \}}. As {f} is continuously differentiable (and hence continuous) on {C_r}, it is bounded. From the geometric series formula

\displaystyle  \frac{1}{w-z} = \frac{1}{w} + \frac{1}{w^2} z + \frac{1}{w^3} z^2 + \dots

and dominated convergence, we conclude that

\displaystyle  f(z) = \sum_{n=0}^\infty (\frac{1}{2\pi i} \int_{C_r} \frac{f(w)}{w^{n+1}}\ dw) z^n

with the right-hand side an absolutely convergent series for {|z| < r}, and the claim follows. \Box
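The Taylor coefficient formula appearing in this proof can be tested numerically. Here is a sketch of my own (using {f = \exp}, whose coefficients are {1/n!}), computing {\frac{1}{2\pi i} \int_{C_r} \frac{f(w)}{w^{n+1}}\ dw} by equispaced quadrature on the circle:

```python
import cmath
import math

def f(z):
    return cmath.exp(z)

def taylor_coeff(n, r=1.0, N=4096):
    # approximate (1/(2 pi i)) \int_{C_r} f(w) / w^{n+1} dw on the circle |w| = r
    total = 0
    for k in range(N):
        t = (k + 0.5) / N
        w = r * cmath.exp(2j * cmath.pi * t)
        dw = 2j * cmath.pi * w / N
        total += f(w) / w ** (n + 1) * dw
    return total / (2j * cmath.pi)

# for exp, the n-th Taylor coefficient at 0 is 1/n!
for n in range(6):
    assert abs(taylor_coeff(n) - 1 / math.factorial(n)) < 1e-10
```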

Exercise 9 Establish the generalised Cauchy integral formulae

\displaystyle  f^{(k)}(z_0) = \frac{k!}{2\pi i} \int_\gamma \frac{f(z)}{(z-z_0)^{k+1}}\ dz

for any non-negative integer {k}, under the hypotheses of Theorem 6 (with {z_0 \in U}), where {f^{(k)}} is the {k}-fold complex derivative of {f}.

This in turn leads to a converse to Cauchy’s theorem, known as Morera’s theorem:

Corollary 10 (Morera’s theorem) Let {f: \Omega \rightarrow {\bf C}} be a continuous function on an open set {\Omega} with the property that {\int_\gamma f(z)\ dz = 0} for all closed contours {\gamma: [0,1] \rightarrow \Omega}. Then {f} is holomorphic.

Proof: We can of course assume {\Omega} to be non-empty and connected (hence path-connected). Fix a point {z_0 \in \Omega}, and define a “primitive” {F: \Omega \rightarrow {\bf C}} of {f} by defining {F(z_1) = \int_\gamma f(z)\ dz}, with {\gamma: [0,1] \rightarrow \Omega} being any contour from {z_0} to {z_1} (this is well defined by hypothesis). By mimicking the proof of the real fundamental theorem of calculus, we see that {F} is holomorphic with {F'=f}, and the claim now follows from Corollary 8. \Box

An important consequence of Morera’s theorem for us is

Corollary 11 (Locally uniform limit of holomorphic functions is holomorphic) Let {f_n: \Omega \rightarrow {\bf C}} be holomorphic functions on an open set {\Omega} which converge locally uniformly to a function {f: \Omega \rightarrow {\bf C}}. Then {f} is also holomorphic on {\Omega}.

Proof: By working locally we may assume that {\Omega} is a ball, and in particular simply connected. By Cauchy’s theorem, {\int_\gamma f_n(z)\ dz = 0} for all closed contours {\gamma} in {\Omega}. By local uniform convergence, this implies that {\int_\gamma f(z)\ dz = 0} for all such contours, and the claim then follows from Morera’s theorem. \Box

Now we study the zeroes of complex analytic functions. If a complex analytic function {f} vanishes at a point {z_0}, but is not identically zero in a neighbourhood of that point, then by Taylor expansion we see that {f} factors in a sufficiently small neighbourhood of {z_0} as

\displaystyle  f(z) = (z-z_0)^n g(z) \ \ \ \ \ (4)

for some natural number {n} (which we call the order or multiplicity of the zero of {f} at {z_0}) and some function {g} that is complex analytic and non-zero near {z_0}; this generalises the factor theorem for polynomials. In particular, the zero {z_0} is isolated if {f} does not vanish identically near {z_0}. We conclude that if {\Omega} is connected and {f} vanishes on a neighbourhood of some point {z_0} in {\Omega}, then it must vanish on all of {\Omega} (since the maximal connected neighbourhood of {z_0} in {\Omega} on which {f} vanishes cannot have any boundary point in {\Omega}). This implies unique continuation of analytic functions: if two complex analytic functions on {\Omega} agree on a non-empty open set, then they agree everywhere. In particular, if a complex analytic function is not identically zero, then all of its zeroes are isolated, so in particular it has only finitely many zeroes on any given compact set.

Recall that a rational function is a function {f} which is a quotient {g/h} of two polynomials (at least outside of the set where {h} vanishes). Analogously, let us define a meromorphic function on an open set {\Omega} to be a function {f: \Omega \backslash S \rightarrow {\bf C}} defined outside of a discrete subset {S} of {\Omega} (the singularities of {f}), which is locally the quotient {g/h} of holomorphic functions, in the sense that for every {z_0 \in \Omega}, one has {f=g/h} in a neighbourhood of {z_0} excluding {S}, with {g, h} holomorphic near {z_0} and with {h} non-vanishing outside of {S}. If {z_0 \in S} and {g} has a zero of equal or higher order than {h} at {z_0}, then the singularity is removable and one can extend the meromorphic function holomorphically across {z_0} (by the holomorphic factor theorem (4)); otherwise, the singularity is non-removable and is known as a pole, whose order is equal to the difference between the order of {h} and the order of {g} at {z_0}. (If one wished, one could extend meromorphic functions to the poles by embedding {{\bf C}} in the Riemann sphere {{\bf C} \cup \{\infty\}} and mapping each pole to {\infty}, but we will not do so here. One could also consider non-meromorphic functions with essential singularities at various points, but we will have no need to analyse such singularities in this course.) If the order of a pole or zero is one, we say that it is simple; if it is two, we say it is double; and so forth.

Exercise 12 Show that the space of meromorphic functions on a non-empty connected open set {\Omega}, quotiented by almost everywhere equivalence, forms a field.

By quotienting two Taylor series, we see that if a meromorphic function {f} has a pole of order {n} at some point {z_0}, then it has a Laurent expansion

\displaystyle  f = \sum_{m=-n}^\infty a_m (z-z_0)^m,

absolutely convergent in a neighbourhood of {z_0} excluding {z_0} itself, and with {a_{-n}} non-zero. The Laurent coefficient {a_{-1}} has a special significance, and is called the residue of the meromorphic function {f} at {z_0}, which we will denote as {\hbox{Res}(f;z_0)}. The importance of this coefficient comes from the following significant generalisation of the Cauchy integral formula, known as the residue theorem:

Exercise 13 (Residue theorem) Let {f} be a meromorphic function on a simply connected domain {\Omega}, and let {\gamma} be a closed contour in {\Omega} enclosing a bounded region {U} anticlockwise, and avoiding all the singularities of {f}. Show that

\displaystyle  \int_\gamma f(z)\ dz = 2\pi i \sum_\rho \hbox{Res}(f;\rho)

where {\rho} is summed over all the poles of {f} that lie in {U}.
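As a numerical illustration of the residue theorem (a sketch of my own, not part of the exercise): take {f(z) = \frac{1}{z(z-2)}}, for which the unit circle encloses only the simple pole at {0}:

```python
import cmath

def f(z):
    return 1 / (z * (z - 2))

# the unit circle encloses only the simple pole at 0, where
# Res(f; 0) = lim_{z -> 0} z f(z) = -1/2
N = 4096
total = 0
for k in range(N):
    t = (k + 0.5) / N
    z = cmath.exp(2j * cmath.pi * t)
    total += f(z) * (2j * cmath.pi * z / N)

# residue theorem: the integral equals 2*pi*i * (-1/2) = -pi*i
assert abs(total - 2j * cmath.pi * (-0.5)) < 1e-8
```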

The residue theorem is particularly useful when applied to logarithmic derivatives {f'/f} of meromorphic functions {f}, because the residue is of a specific form:

Exercise 14 Let {f} be a meromorphic function on an open set {\Omega} that does not vanish identically. Show that the only poles of {f'/f} are simple poles (poles of order {1}), occurring at the poles and zeroes of {f} (after all removable singularities have been removed). Furthermore, the residue of {f'/f} at a pole {z_0} is an integer, equal to the order of the zero of {f} at {z_0} if {f} has a zero there, or equal to the negative of the order of the pole of {f} at {z_0} if {f} has a pole there.
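Combining this exercise with the residue theorem gives the argument principle: {\frac{1}{2\pi i} \int_\gamma \frac{f'}{f}(z)\ dz} counts zeroes minus poles inside {\gamma}, with multiplicity. Here is a quick numerical check of my own, with {f(z) = z^3(z-2)}, which has three zeroes (counted with multiplicity) inside the unit circle:

```python
import cmath

def f(z):
    return z ** 3 * (z - 2)          # zero of order 3 at 0, simple zero at 2

def fp(z):                           # derivative by the product rule
    return 3 * z ** 2 * (z - 2) + z ** 3

# (1/(2 pi i)) \int_{|z|=1} f'/f dz should equal the number of zeroes
# inside the unit circle, counted with multiplicity: here, 3
N = 4096
total = 0
for k in range(N):
    t = (k + 0.5) / N
    z = cmath.exp(2j * cmath.pi * t)
    total += fp(z) / f(z) * (2j * cmath.pi * z / N)

count = total / (2j * cmath.pi)
assert abs(count - 3) < 1e-8
```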

Remark 15 The fact that residues of logarithmic derivatives of meromorphic functions are automatically integers is a remarkable feature of the complex analytic approach to multiplicative number theory, which is difficult (though not entirely impossible) to duplicate in other approaches to the subject. Here is a sample application of this integrality, which is challenging to reproduce by non-complex-analytic means: if {f} is meromorphic near {z_0}, and one has the bound {|\frac{f'}{f}(z_0+t)| \leq \frac{0.9}{t} + O(1)} as {t \rightarrow 0^+}, then {\frac{f'}{f}} must in fact stay bounded near {z_0}, because the only integer of magnitude less than {0.9} is zero.


In the traditional foundations of probability theory, one selects a probability space {(\Omega, {\mathcal B}, {\mathbf P})}, and makes a distinction between deterministic mathematical objects, which do not depend on the sampled state {\omega \in \Omega}, and stochastic (or random) mathematical objects, which do depend (but in a measurable fashion) on the sampled state {\omega \in \Omega}. For instance, a deterministic real number would just be an element {x \in {\bf R}}, whereas a stochastic real number (or real random variable) would be a measurable function {x: \Omega \rightarrow {\bf R}}, where in this post {{\bf R}} will always be endowed with the Borel {\sigma}-algebra. (For readers familiar with nonstandard analysis, the adjectives “deterministic” and “stochastic” will be used here in a manner analogous to the uses of the adjectives “standard” and “nonstandard” in nonstandard analysis. The analogy is particularly close when comparing with the “cheap nonstandard analysis” discussed in this previous blog post. We will also use “relative to {\Omega}” as a synonym for “stochastic”.)

Actually, for our purposes we will adopt the philosophy of identifying stochastic objects that agree almost surely, so if one was to be completely precise, we should define a stochastic real number to be an equivalence class {[x]} of measurable functions {x: \Omega \rightarrow {\bf R}}, up to almost sure equivalence. However, we shall often abuse notation and write {[x]} simply as {x}.

More generally, given any measurable space {X = (X, {\mathcal X})}, we can talk either about deterministic elements {x \in X}, or about stochastic elements of {X}, that is to say equivalence classes {[x]} of measurable maps {x: \Omega \rightarrow X} up to almost sure equivalence. We will use {\Gamma(X|\Omega)} to denote the set of all stochastic elements of {X}. (For readers familiar with sheaves, it may be helpful for the purposes of this post to think of {\Gamma(X|\Omega)} as the space of measurable global sections of the trivial {X}-bundle over {\Omega}.) Of course every deterministic element {x} of {X} can also be viewed as a stochastic element {x|\Omega \in \Gamma(X|\Omega)} given by (the equivalence class of) the constant function {\omega \mapsto x}, thus giving an embedding of {X} into {\Gamma(X|\Omega)}. We do not attempt here to give an interpretation of {\Gamma(X|\Omega)} for sets {X} that are not equipped with a {\sigma}-algebra {{\mathcal X}}.

Remark 1 In my previous post on the foundations of probability theory, I emphasised the freedom to extend the sample space {(\Omega, {\mathcal B}, {\mathbf P})} to a larger sample space whenever one wished to inject additional sources of randomness. This is of course an important freedom to possess (and in the current formalism, is the analogue of the important operation of base change in algebraic geometry), but in this post we will focus on a single fixed sample space {(\Omega, {\mathcal B}, {\mathbf P})}, and not consider extensions of this space, so that one only has to consider two types of mathematical objects (deterministic and stochastic), as opposed to having many more such types, one for each potential choice of sample space (with the deterministic objects corresponding to the case when the sample space collapses to a point).

Any (measurable) {k}-ary operation on deterministic mathematical objects then extends to their stochastic counterparts by applying the operation pointwise. For instance, the addition operation {+: {\bf R} \times {\bf R} \rightarrow {\bf R}} on deterministic real numbers extends to an addition operation {+: \Gamma({\bf R}|\Omega) \times \Gamma({\bf R}|\Omega) \rightarrow \Gamma({\bf R}|\Omega)}, by defining the class {[x]+[y]} for {x,y: \Omega \rightarrow {\bf R}} to be the equivalence class of the function {\omega \mapsto x(\omega) + y(\omega)}; this operation is easily seen to be well-defined. More generally, any measurable {k}-ary deterministic operation {O: X_1 \times \dots \times X_k \rightarrow Y} between measurable spaces {X_1,\dots,X_k,Y} extends to a stochastic operation {O: \Gamma(X_1|\Omega) \times \dots \times \Gamma(X_k|\Omega) \rightarrow \Gamma(Y|\Omega)} in the obvious manner.
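This pointwise lifting is easy to model concretely over a finite sample space; the following toy Python sketch (entirely my own, representing stochastic reals as dicts from states to values) is one way to see it:

```python
# A toy model of stochastic elements over a finite sample space Omega,
# represented as dicts mapping each state omega to a value.
Omega = ["w1", "w2", "w3"]

def lift(op):
    """Lift a k-ary deterministic operation to stochastic elements, pointwise."""
    def stochastic_op(*xs):
        return {w: op(*(x[w] for x in xs)) for w in Omega}
    return stochastic_op

add = lift(lambda a, b: a + b)

x = {"w1": 1.0, "w2": 2.0, "w3": 3.0}
y = {"w1": 10.0, "w2": 20.0, "w3": 30.0}
assert add(x, y) == {"w1": 11.0, "w2": 22.0, "w3": 33.0}
```

(In the measure-theoretic setting one would also need to quotient by almost sure equivalence and check measurability, which this finite toy model trivialises.)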

There is a similar story for {k}-ary relations {R: X_1 \times \dots \times X_k \rightarrow \{\hbox{true},\hbox{false}\}}, although here one has to make a distinction between a deterministic reading of the relation and a stochastic one. Namely, if we are given stochastic objects {x_i \in \Gamma(X_i|\Omega)} for {i=1,\dots,k}, the relation {R(x_1,\dots,x_k)} does not necessarily take values in the deterministic Boolean algebra {\{ \hbox{true}, \hbox{false}\}}, but only in the stochastic Boolean algebra {\Gamma(\{ \hbox{true}, \hbox{false}\}|\Omega)} – thus {R(x_1,\dots,x_k)} may be true with some positive probability and also false with some positive probability (with the event that {R(x_1,\dots,x_k)} is stochastically true being determined up to null events). Of course, the deterministic Boolean algebra embeds in the stochastic one, so we can talk about a relation {R(x_1,\dots,x_k)} being deterministically true or deterministically false, which (due to our identification of stochastic objects that agree almost surely) means that {R(x_1(\omega),\dots,x_k(\omega))} is almost surely true or almost surely false respectively. For instance, given two stochastic objects {x,y}, one can view their equality relation {x=y} as having a stochastic truth value. This is distinct from the way the equality symbol {=} is used in mathematical logic, which we will now call “equality in the deterministic sense” to reduce confusion. Thus, {x=y} in the deterministic sense if and only if the stochastic truth value of {x=y} is equal to {\hbox{true}}, that is to say that {x(\omega)=y(\omega)} for almost all {\omega}.

Any universal identity for deterministic operations (or universal implication between identities) extends to their stochastic counterparts: for instance, addition is commutative, associative, and cancellative on the space of deterministic reals {{\bf R}}, and is therefore commutative, associative, and cancellative on stochastic reals {\Gamma({\bf R}|\Omega)} as well. However, one has to be more careful when working with mathematical laws that are not expressible as universal identities, or implications between identities. For instance, {{\bf R}} is an integral domain: if {x_1,x_2 \in {\bf R}} are deterministic reals such that {x_1 x_2=0}, then one must have {x_1=0} or {x_2=0}. However, if {x_1, x_2 \in \Gamma({\bf R}|\Omega)} are stochastic reals such that {x_1 x_2 = 0} (in the deterministic sense), then it is no longer necessarily the case that {x_1=0} (in the deterministic sense) or that {x_2=0} (in the deterministic sense); however, it is still true that “{x_1=0} or {x_2=0}” is true in the deterministic sense if one interprets the boolean operator “or” stochastically, thus “{x_1(\omega)=0} or {x_2(\omega)=0}” is true for almost all {\omega}. Another way to properly obtain a stochastic interpretation of the integral domain property of {{\bf R}} is to rewrite it as

\displaystyle  x_1,x_2 \in {\bf R}, x_1 x_2 = 0 \implies x_i=0 \hbox{ for some } i \in \{1,2\}

and then make all sets stochastic to obtain the true statement

\displaystyle  x_1,x_2 \in \Gamma({\bf R}|\Omega), x_1 x_2 = 0 \implies x_i=0 \hbox{ for some } i \in \Gamma(\{1,2\}|\Omega),

thus we have to allow the index {i} for which vanishing {x_i=0} occurs to also be stochastic, rather than deterministic. (A technical note: when one proves this statement, one has to select {i} in a measurable fashion; for instance, one can choose {i(\omega)} to equal {1} when {x_1(\omega)=0}, and {2} otherwise (so that in the “tie-breaking” case when {x_1(\omega)} and {x_2(\omega)} both vanish, one always selects {i(\omega)} to equal {1}).)
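The measurable tie-breaking selection described in the technical note is easy to exhibit concretely; here is a toy Python sketch of my own over a four-point sample space, where neither {x_1} nor {x_2} vanishes deterministically but the product does:

```python
# Two stochastic reals over a four-point sample space with x1 * x2 = 0 pointwise,
# where neither x1 nor x2 vanishes deterministically.
Omega = [0, 1, 2, 3]
x1 = {0: 0.0, 1: 5.0, 2: 0.0, 3: 0.0}
x2 = {0: 7.0, 1: 0.0, 2: 0.0, 3: 4.0}
assert all(x1[w] * x2[w] == 0.0 for w in Omega)

# measurable choice of index: i(omega) = 1 when x1(omega) = 0, else 2
# (ties, where both vanish, are broken towards 1)
i = {w: 1 if x1[w] == 0.0 else 2 for w in Omega}
assert all((x1 if i[w] == 1 else x2)[w] == 0.0 for w in Omega)
```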

Similarly, the law of the excluded middle fails when interpreted deterministically, but remains true when interpreted stochastically: if {S} is a stochastic statement, then it is not necessarily the case that {S} is either deterministically true or deterministically false; however the sentence “{S} or not-{S}” is still deterministically true if the boolean operator “or” is interpreted stochastically rather than deterministically.

To avoid having to keep pointing out which operations are interpreted stochastically and which ones are interpreted deterministically, we will use the following convention: if we assert that a mathematical sentence {S} involving stochastic objects is true, then (unless otherwise specified) we mean that {S} is deterministically true, assuming that all relations used inside {S} are interpreted stochastically. For instance, if {x,y} are stochastic reals, when we assert that “Exactly one of {x < y}, {x=y}, or {x>y} is true”, then by default it is understood that the relations {<}, {=}, {>} and the boolean operator “exactly one of” are interpreted stochastically, and the assertion is that the sentence is deterministically true.

In the above discussion, the stochastic objects {x} being considered were elements of a deterministic space {X}, such as the reals {{\bf R}}. However, it can often be convenient to generalise this situation by allowing the ambient space {X} to also be stochastic. For instance, one might wish to consider a stochastic vector {v(\omega)} inside a stochastic vector space {V(\omega)}, or a stochastic edge {e} of a stochastic graph {G(\omega)}. In order to formally describe this situation within the classical framework of measure theory, one needs to place all the ambient spaces {X(\omega)} inside a measurable space. This can certainly be done in many contexts (e.g. when considering random graphs on a deterministic set of vertices, or if one is willing to work up to equivalence and place the ambient spaces inside a suitable moduli space), but is not completely natural in other contexts. For instance, if one wishes to consider stochastic vector spaces of potentially unbounded dimension (in particular, potentially larger than any given cardinal that one might specify in advance), then the class of all possible vector spaces is so large that it becomes a proper class rather than a set (even if one works up to equivalence), making it problematic to give this class the structure of a measurable space; furthermore, even once one does so, one needs to take additional care to pin down what it would mean for a random vector {\omega \mapsto v_\omega} lying in a random vector space {\omega \mapsto V_\omega} to depend “measurably” on {\omega}.

Of course, in any reasonable application one can avoid the set theoretic issues at least by various ad hoc means, for instance by restricting the dimension of all spaces involved to some fixed cardinal such as {2^{\aleph_0}}. However, the measure-theoretic issues can require some additional effort to resolve properly.

In this post I would like to describe a different way to formalise stochastic spaces, and stochastic elements of these spaces, by viewing the spaces as a measure-theoretic analogue of a sheaf, but being over the probability space {\Omega} rather than over a topological space; stochastic objects are then sections of such sheaves. Actually, for minor technical reasons it is convenient to work in the slightly more general setting in which the base space {\Omega} is a finite measure space {(\Omega, {\mathcal B}, \mu)} rather than a probability space, thus {\mu(\Omega)} can take any value in {[0,+\infty)} rather than being normalised to equal {1}. This will allow us to easily localise to subevents {\Omega'} of {\Omega} without the need for normalisation, even when {\Omega'} is a null event (though we caution that the map {x \mapsto x|\Omega'} from deterministic objects {x} ceases to be injective in this latter case). We will however still continue to use probabilistic terminology despite the lack of normalisation; thus for instance, sets {E} in {{\mathcal B}} will be referred to as events, the measure {\mu(E)} of such a set will be referred to as the probability (which is now permitted to exceed {1} in some cases), and an event whose complement is a null event shall be said to hold almost surely. It is in fact likely that almost all of the theory below extends to base spaces which are {\sigma}-finite rather than finite (for instance, by damping the measure to become finite, without introducing any further null events), although we will not pursue this further generalisation here.

The approach taken in this post is “topos-theoretic” in nature (although we will not use the language of topoi explicitly here), and is well suited to a “pointless” or “point-free” approach to probability theory, in which the role of the stochastic state {\omega \in \Omega} is suppressed as much as possible; instead, one strives to always adopt a “relative point of view”, with all objects under consideration being viewed as stochastic objects relative to the underlying base space {\Omega}. In this perspective, the stochastic version of a set is as follows.

Definition 1 (Stochastic set) Unless otherwise specified, we assume that we are given a fixed finite measure space {\Omega = (\Omega, {\mathcal B}, \mu)} (which we refer to as the base space). A stochastic set (relative to {\Omega}) is a tuple {X|\Omega = (\Gamma(X|E)_{E \in {\mathcal B}}, ((|E))_{E \subset F, E,F \in {\mathcal B}})} consisting of the following objects:

  • A set {\Gamma(X|E)} assigned to each event {E \in {\mathcal B}}; and
  • A restriction map {x \mapsto x|E} from {\Gamma(X|F)} to {\Gamma(X|E)} to each pair {E \subset F} of nested events {E,F \in {\mathcal B}}. (Strictly speaking, one should indicate the dependence on {F} in the notation for the restriction map, e.g. using {x \mapsto x|(E \leftarrow F)} instead of {x \mapsto x|E}, but we will abuse notation by omitting the {F} dependence.)

We refer to elements of {\Gamma(X|E)} as local stochastic elements of the stochastic set {X|\Omega}, localised to the event {E}, and elements of {\Gamma(X|\Omega)} as global stochastic elements (or simply elements) of the stochastic set. (In the language of sheaves, one would use “sections” instead of “elements” here, but I prefer to use the latter terminology here, for compatibility with conventional probabilistic notation, where for instance measurable maps from {\Omega} to {{\bf R}} are referred to as real random variables, rather than sections of the reals.)

Furthermore, we impose the following axioms:

  • (Category) The map {x \mapsto x|E} from {\Gamma(X|E)} to {\Gamma(X|E)} is the identity map, and if {E \subset F \subset G} are events in {{\mathcal B}}, then {((x|F)|E) = (x|E)} for all {x \in \Gamma(X|G)}.
  • (Null events trivial) If {E \in {\mathcal B}} is a null event, then the set {\Gamma(X|E)} is a singleton set. (In particular, {\Gamma(X|\emptyset)} is always a singleton set; this is analogous to the convention that {x^0=1} for any number {x}.)
  • (Countable gluing) Suppose that for each natural number {n}, one has an event {E_n \in {\mathcal B}} and an element {x_n \in \Gamma(X|E_n)} such that {x_n|(E_n \cap E_m) = x_m|(E_n \cap E_m)} for all {n,m}. Then there exists a unique {x\in \Gamma(X|\bigcup_{n=1}^\infty E_n)} such that {x_n = x|E_n} for all {n}.

If {\Omega'} is an event in {\Omega}, we define the localisation {X|\Omega'} of the stochastic set {X|\Omega} to {\Omega'} to be the stochastic set

\displaystyle X|\Omega' := (\Gamma(X|E)_{E \in {\mathcal B}; E \subset \Omega'}, ((|E))_{E \subset F \subset \Omega', E,F \in {\mathcal B}})

relative to {\Omega'}. (Note that there is no need to renormalise the measure on {\Omega'}, as we are not demanding that our base space have total measure {1}.)

The following fact is useful for actually verifying that a given object indeed has the structure of a stochastic set:

Exercise 1 Show that to verify the countable gluing axiom of a stochastic set, it suffices to do so under the additional hypothesis that the events {E_n} are disjoint. (Note that this is quite different from the situation with sheaves over a topological space, in which the analogous gluing axiom is often trivial in the disjoint case but has non-trivial content in the overlapping case. This is ultimately because a {\sigma}-algebra is closed under all Boolean operations, whereas a topology is only closed under union and intersection.)

Let us illustrate the concept of a stochastic set with some examples.

Example 1 (Discrete case) A simple case arises when {\Omega} is a discrete space which is at most countable. Assign a set {X_\omega} to each {\omega \in \Omega}, with {X_\omega} a singleton if {\mu(\{\omega\})=0}. One then sets {\Gamma(X|E) := \prod_{\omega \in E} X_\omega}, with the obvious restriction maps, giving rise to a stochastic set {X|\Omega}. (Thus, a local element {x} of {\Gamma(X|E)} can be viewed as a map {\omega \mapsto x(\omega)} on {E} that takes values in {X_\omega} for each {\omega \in E}.) Conversely, it is not difficult to see that any stochastic set over an at most countable discrete probability space {\Omega} is of this form up to isomorphism. In this case, one can think of {X|\Omega} as a bundle of sets {X_\omega} over each point {\omega} (of positive probability) in the base space {\Omega}. One can extend this bundle interpretation of stochastic sets to reasonably nice sample spaces {\Omega} (such as standard Borel spaces) and similarly reasonable {X}; however, I would like to avoid this interpretation in the formalism below in order to be able to easily work in settings in which {\Omega} and {X} are very “large” (e.g. not separable in any reasonable sense). Note that we permit some of the {X_\omega} to be empty, thus it is possible for {\Gamma(X|\Omega)} to be empty whilst {\Gamma(X|E)} is non-empty for some strict subevents {E} of {\Omega}. (This is analogous to how it is possible for a sheaf to have local sections but no global sections.) As such, the space {\Gamma(X|\Omega)} of global elements does not completely determine the stochastic set {X|\Omega}; one sometimes needs to localise to an event {E} in order to see the full structure of such a set. Thus it is important to distinguish between a stochastic set {X|\Omega} and its space {\Gamma(X|\Omega)} of global elements. 
(As such, it is a slight abuse of the axiom of extensionality to refer to global elements of {X|\Omega} simply as “elements”, but hopefully this should not cause too much confusion.)
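The discrete case can be modelled in a few lines of Python, which may help fix intuition (the representation is purely illustrative, with fibres as sets and local elements as dicts):

```python
from itertools import product

# Fibres X_ω over a three-point sample space; a local element of Γ(X|E)
# is a choice of one point from each fibre over E, i.e. a dict on E.
fibres = {1: {"a", "b"}, 2: {"c"}, 3: {"d", "e"}}

def sections(E):
    """All local elements of Γ(X|E): the product of the fibres over E."""
    pts = sorted(E)
    return [dict(zip(pts, choice))
            for choice in product(*(fibres[w] for w in pts))]

def restrict(x, E):
    """The restriction map Γ(X|F) → Γ(X|E) for E ⊂ F."""
    return {w: x[w] for w in E}

print(len(sections({1, 2, 3})))                 # 2 * 1 * 2 = 4 global elements
x = sections({1, 2, 3})[0]
print(restrict(x, {1, 2}) in sections({1, 2}))  # True
```

If one of the fibres were empty, `sections` over any event containing that point would be empty, matching the remark above about empty {\Gamma(X|\Omega)} with non-empty localisations.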

Example 2 (Measurable spaces as stochastic sets) Returning now to a general base space {\Omega}, any (deterministic) measurable space {X} gives rise to a stochastic set {X|\Omega}, with {\Gamma(X|E)} being defined as in the previous discussion as the measurable functions from {E} to {X} modulo almost everywhere equivalence (in particular, {\Gamma(X|E)} is a singleton set when {E} is null), with the usual restriction maps. The constraint of measurability on the maps {x: E \rightarrow X}, together with the quotienting by almost sure equivalence, means that {\Gamma(X|E)} is now more complicated than a plain Cartesian product {\prod_{\omega \in E} X_\omega} of fibres, but this still serves as a useful first approximation to what {\Gamma(X|E)} is for the purposes of developing intuition. Indeed, the measurability constraint is so weak (as compared for instance to topological or smooth constraints in other contexts, such as sheaves of continuous or smooth sections of bundles) that the intuition of essentially independent fibres is quite an accurate one, at least if one avoids consideration of an uncountable number of objects simultaneously.

Example 3 (Extended Hilbert modules) This example is the one that motivated this post for me. Suppose that one has an extension {(\tilde \Omega, \tilde {\mathcal B}, \tilde \mu)} of the base space {(\Omega, {\mathcal B},\mu)}, thus we have a measurable factor map {\pi: \tilde \Omega \rightarrow \Omega} such that the pushforward of the measure {\tilde \mu} by {\pi} is equal to {\mu}. Then we have a conditional expectation operator {\pi_*: L^2(\tilde \Omega,\tilde {\mathcal B},\tilde \mu) \rightarrow L^2(\Omega,{\mathcal B},\mu)}, defined as the adjoint of the pullback map {\pi^*: L^2(\Omega,{\mathcal B},\mu) \rightarrow L^2(\tilde \Omega,\tilde {\mathcal B},\tilde \mu)}. As is well known, the conditional expectation operator also extends to a contraction {\pi_*: L^1(\tilde \Omega,\tilde {\mathcal B},\tilde \mu) \rightarrow L^1(\Omega,{\mathcal B}, \mu)}; by monotone convergence we may also extend {\pi_*} to a map from measurable functions from {\tilde \Omega} to the extended non-negative reals {[0,+\infty]}, to measurable functions from {\Omega} to {[0,+\infty]}. We then define the “extended Hilbert module” {L^2(\tilde \Omega|\Omega)} to be the space of functions {f \in L^2(\tilde \Omega,\tilde {\mathcal B},\tilde \mu)} with {\pi_*(|f|^2)} finite almost everywhere. This is an extended version of the Hilbert module {L^\infty_{\Omega} L^2(\tilde \Omega|\Omega)}, which is defined similarly except that {\pi_*(|f|^2)} is required to lie in {L^\infty(\Omega,{\mathcal B},\mu)}; this is a Hilbert module over {L^\infty(\Omega, {\mathcal B}, \mu)} which is of particular importance in the Furstenberg-Zimmer structure theory of measure-preserving systems. We can then define the stochastic set {L^2_\pi(\tilde \Omega)|\Omega} by setting

\displaystyle  \Gamma(L^2_\pi(\tilde \Omega)|E) := L^2( \pi^{-1}(E) | E )

with the obvious restriction maps. In the case that {\Omega,\tilde \Omega} are standard Borel spaces, one can disintegrate {\tilde \mu} as an integral {\tilde \mu = \int_\Omega \nu_\omega\ d\mu(\omega)} of probability measures {\nu_\omega} (supported in the fibre {\pi^{-1}(\{\omega\})}), in which case this stochastic set can be viewed as having fibres {L^2( \tilde \Omega, \tilde {\mathcal B}, \nu_\omega )} (though if {\Omega} is not discrete, there are still some measurability conditions in {\omega} on the local and global elements that need to be imposed). However, I am interested in the case when {\Omega,\tilde \Omega} are not standard Borel spaces (in fact, I will take them to be algebraic probability spaces, as defined in this previous post), in which case disintegrations are not available. Nevertheless, it appears that the stochastic analysis developed in this blog post can serve as a substitute for the tool of disintegration in this context.

We make the remark that if {X|\Omega} is a stochastic set and {E, F} are events that are equivalent up to null events, then one can identify {\Gamma(X|E)} with {\Gamma(X|F)} (through their common restriction to {\Gamma(X|(E \cap F))}, with the restriction maps now being bijections). As such, the notion of a stochastic set does not require the full structure of a concrete probability space {(\Omega, {\mathcal B}, {\mathbf P})}; one could also have defined the notion using only the abstract {\sigma}-algebra consisting of {{\mathcal B}} modulo null events as the base space, or equivalently one could define stochastic sets over the algebraic probability spaces defined in this previous post. However, we will stick with the classical formalism of concrete probability spaces here so as to keep the notation reasonably familiar.

As a corollary of the above observation, we see that if the base space {\Omega} has total measure {0}, then all stochastic sets are trivial (they are just points).

Exercise 2 If {X|\Omega} is a stochastic set, show that there exists an event {\Omega'} with the property that for any event {E}, {\Gamma(X|E)} is non-empty if and only if {E} is contained in {\Omega'} modulo null events. (In particular, {\Omega'} is unique up to null events.) Hint: consider the numbers {\mu( E )} for {E} ranging over all events with {\Gamma(X|E)} non-empty, and form a maximising sequence for these numbers. Then use all three axioms of a stochastic set.

One can now start to take many of the fundamental objects, operations, and results in set theory (and, hence, in most other categories of mathematics) and establish analogues relative to a finite measure space. Implicitly, what we will be doing in the next few paragraphs is endowing the category of stochastic sets with the structure of an elementary topos. However, to keep things reasonably concrete, we will not explicitly emphasise the topos-theoretic formalism here, although it is certainly lurking in the background.

Firstly, we define a stochastic function {f: X|\Omega \rightarrow Y|\Omega} between two stochastic sets {X|\Omega, Y|\Omega} to be a collection of maps {f: \Gamma(X|E) \rightarrow \Gamma(Y|E)} for each {E \in {\mathcal B}} which form a natural transformation in the sense that {f(x|E) = f(x)|E} for all {x \in \Gamma(X|F)} and nested events {E \subset F}. In the case when {\Omega} is discrete and at most countable (and after deleting all null points), a stochastic function is nothing more than a collection of functions {f_\omega: X_\omega \rightarrow Y_\omega} for each {\omega \in \Omega}, with the function {f: \Gamma(X|E) \rightarrow \Gamma(Y|E)} then being a direct sum of the factor functions {f_\omega}:

\displaystyle  f( (x_\omega)_{\omega \in E} ) = ( f_\omega(x_\omega) )_{\omega \in E}.

Thus (in the discrete, at most countable setting, at least) stochastic functions do not mix together information from different states {\omega} in a sample space; the value of {f(x)} at {\omega} depends only on the value of {x} at {\omega}. The situation is a bit more subtle for continuous probability spaces, due to the identification of stochastic objects that agree almost surely; nevertheless, it is still good intuition to think of stochastic functions as essentially being “pointwise” or “local” in nature.
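In the discrete model, the fibrewise nature of stochastic functions can be sketched as follows (again a toy illustration, with each {f_\omega} an ordinary Python function on the fibre at {\omega}):

```python
# In the discrete model a stochastic function is a family f_ω of ordinary
# functions, one per sample point, applied fibrewise to a local element.
f_fibre = {1: lambda v: v.upper(), 2: lambda v: v + "!", 3: len}

def apply_stochastic(f_fibre, x):
    """f(x)(ω) depends only on x(ω): no mixing between sample points."""
    return {w: f_fibre[w](v) for w, v in x.items()}

x = {1: "a", 2: "c", 3: "dd"}
print(apply_stochastic(f_fibre, x))  # {1: 'A', 2: 'c!', 3: 2}
```

Note that the naturality condition {f(x|E) = f(x)|E} holds automatically here, since restriction and fibrewise application commute.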

One can now form the stochastic set {\hbox{Hom}(X \rightarrow Y)|\Omega} of functions from {X|\Omega} to {Y|\Omega}, by setting {\Gamma(\hbox{Hom}(X \rightarrow Y)|E)} for any event {E} to be the set of local stochastic functions {f: X|E \rightarrow Y|E} of the localisations of {X|\Omega, Y|\Omega} to {E}; this is a stochastic set if we use the obvious restriction maps. In the case when {\Omega} is discrete and at most countable, the fibre {\hbox{Hom}(X \rightarrow Y)_\omega} at a point {\omega} of positive measure is simply the set {Y_\omega^{X_\omega}} of functions from {X_\omega} to {Y_\omega}.

In a similar spirit, we say that one stochastic set {Y|\Omega} is a (stochastic) subset of another {X|\Omega}, and write {Y|\Omega \subset X|\Omega}, if we have a stochastic inclusion map, thus {\Gamma(Y|E) \subset \Gamma(X|E)} for all events {E}, with the restriction maps being compatible. We can then define the power set {2^X|\Omega} of a stochastic set {X|\Omega} by setting {\Gamma(2^X|E)} for any event {E} to be the set of all stochastic subsets {Y|E} of {X|E} relative to {E}; it is easy to see that {2^X|\Omega} is a stochastic set with the obvious restriction maps (one can also identify {2^X|\Omega} with {\hbox{Hom}(X \rightarrow \{\hbox{true},\hbox{false}\})|\Omega} in the obvious fashion). Again, when {\Omega} is discrete and at most countable, the fibre of {2^X|\Omega} at a point {\omega} of positive measure is simply the deterministic power set {2^{X_\omega}}.

Note that if {f: X|\Omega \rightarrow Y|\Omega} is a stochastic function and {Y'|\Omega} is a stochastic subset of {Y|\Omega}, then the inverse image {f^{-1}(Y')|\Omega}, defined by setting {\Gamma(f^{-1}(Y')|E)} for any event {E} to be the set of those {x \in \Gamma(X|E)} with {f(x) \in \Gamma(Y'|E)}, is a stochastic subset of {X|\Omega}. In particular, given a {k}-ary relation {R: X_1 \times \dots \times X_k|\Omega \rightarrow \{\hbox{true}, \hbox{false}\}|\Omega}, the inverse image {R^{-1}( \{ \hbox{true} \}|\Omega )} is a stochastic subset of {X_1 \times \dots \times X_k|\Omega}, which by abuse of notation we denote as

\displaystyle  \{ (x_1,\dots,x_k) \in X_1 \times \dots \times X_k: R(x_1,\dots,x_k) \hbox{ is true} \}|\Omega.

In a similar spirit, if {X'|\Omega} is a stochastic subset of {X|\Omega} and {f: X|\Omega \rightarrow Y|\Omega} is a stochastic function, we can define the image {f(X')|\Omega} by setting {\Gamma(f(X')|E)} to be the set of those {f(x)} with {x \in \Gamma(X'|E)}; one easily verifies that this is a stochastic subset of {Y|\Omega}.

Remark 2 One should caution that in the definition of the subset relation {Y|\Omega \subset X|\Omega}, it is important that {\Gamma(Y|E) \subset \Gamma(X|E)} for all events {E}, not just the global event {\Omega}; in particular, just because a stochastic set {X|\Omega} has no global sections, does not mean that it is contained in the stochastic empty set {\emptyset|\Omega}.

Now we discuss Boolean operations on stochastic subsets of a given stochastic set {X|\Omega}. Given two stochastic subsets {X_1|\Omega, X_2|\Omega} of {X|\Omega}, the stochastic intersection {(X_1 \cap X_2)|\Omega} is defined by setting {\Gamma((X_1 \cap X_2)|E)} to be the set of {x \in \Gamma(X|E)} that lie in both {\Gamma(X_1|E)} and {\Gamma(X_2|E)}:

\displaystyle  \Gamma((X_1 \cap X_2)|E) := \Gamma(X_1|E) \cap \Gamma(X_2|E).

This is easily verified to again be a stochastic subset of {X|\Omega}. More generally one may define stochastic countable intersections {(\bigcap_{n=1}^\infty X_n)|\Omega} for any sequence {X_n|\Omega} of stochastic subsets of {X|\Omega}. One could extend this definition to uncountable families if one wished, but I would advise against it, because some of the usual laws of Boolean algebra (e.g. the de Morgan laws) may break down in this setting.

Stochastic unions are a bit more subtle. The set {\Gamma((X_1 \cup X_2)|E)} should not be defined to simply be the union of {\Gamma(X_1|E)} and {\Gamma(X_2|E)}, as this would not respect the gluing axiom. Instead, we define {\Gamma((X_1 \cup X_2)|E)} to be the set of all {x \in \Gamma(X|E)} such that one can cover {E} by measurable subevents {E_1,E_2} such that {x|E_i \in \Gamma(X_i|E_i)} for {i=1,2}; then {(X_1 \cup X_2)|\Omega} may be verified to be a stochastic subset of {X|\Omega}. Thus for instance {\{0,1\}|\Omega} is the stochastic union of {\{0\}|\Omega} and {\{1\}|\Omega}. Similarly for countable unions {(\bigcup_{n=1}^\infty X_n)|\Omega} of stochastic subsets {X_n|\Omega} of {X|\Omega}, although uncountable unions are extremely problematic (they are disliked by both the measure theory and the countable gluing axiom) and will not be defined here. Finally, the stochastic difference set {\Gamma((X_1 \backslash X_2)|E)} is defined as the set of all {x|E} in {\Gamma(X_1|E)} such that {x|F \not \in \Gamma(X_2|F)} for any subevent {F} of {E} of positive probability. One may verify that in the case when {\Omega} is discrete and at most countable, these Boolean operations correspond to the classical Boolean operations applied separately to each fibre {X_{i,\omega}} of the relevant sets {X_i}. We also leave as an exercise to the reader to verify the usual laws of Boolean arithmetic, e.g. the de Morgan laws, provided that one works with at most countable unions and intersections.
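In the discrete model the fibrewise description of these Boolean operations is straightforward to write down (a toy sketch, with stochastic subsets represented as a choice of subset of each fibre):

```python
# Stochastic subsets of a discrete stochastic set, represented as a subset
# of each fibre; the Boolean operations then act separately on each fibre.
X1 = {1: {"a"}, 2: {"c", "d"}, 3: set()}
X2 = {1: {"a", "b"}, 2: {"d"}, 3: {"e"}}

def fibrewise(op, A, B):
    """Apply a binary set operation fibre by fibre."""
    return {w: op(A[w], B[w]) for w in A}

union = fibrewise(set.union, X1, X2)
inter = fibrewise(set.intersection, X1, X2)
diff  = fibrewise(set.difference, X1, X2)
print(inter)  # {1: {'a'}, 2: {'d'}, 3: set()}
```

In the continuous setting the union is subtler, as in the definition above, precisely because an element may lie in {X_1} on part of the event and in {X_2} on the rest.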

One can also consider a stochastic finite union {(\bigcup_{n=1}^N X_n)|\Omega} in which the number {N} of sets in the union is itself stochastic. More precisely, let {X|\Omega} be a stochastic set, let {N \in {\bf N}|\Omega} be a stochastic natural number, and let {n \mapsto X_n|\Omega} be a stochastic function from the stochastic set {\{ n \in {\bf N}: n \leq N\}|\Omega} (defined by setting {\Gamma(\{n \in {\bf N}: n\leq N\}|E) := \{ n \in {\bf N}|E: n \leq N|E\}}) to the stochastic power set {2^X|\Omega}. Here we are considering {0} to be a natural number, to allow for unions that are possibly empty, with {{\bf N}_+ := {\bf N} \backslash \{0\}} used for the positive natural numbers. We also write {(X_n)_{n=1}^N|\Omega} for the stochastic function {n \mapsto X_n|\Omega}. Then we can define the stochastic union {\bigcup_{n=1}^N X_n|\Omega} by setting {\Gamma(\bigcup_{n=1}^N X_n|E)} for an event {E} to be the set of local elements {x \in \Gamma(X|E)} with the property that there exists a covering of {E} by measurable subevents {E_{n_0}} for {n_0 \in {\bf N}_+}, such that one has {n_0 \leq N|E_{n_0}} and {x|E_{n_0} \in \Gamma(X_{n_0}|E_{n_0})}. One can verify that {\bigcup_{n=1}^N X_n|\Omega} is a stochastic set (with the obvious restriction maps). Again, in the model case when {\Omega} is discrete and at most countable, the fibre {(\bigcup_{n=1}^N X_n)_\omega} is what one would expect it to be, namely {\bigcup_{n=1}^{N(\omega)} (X_n)_\omega}.
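The fibrewise formula {\bigcup_{n=1}^{N(\omega)} (X_n)_\omega} in the discrete case can be sketched as follows (again with ad hoc dict representations of the stochastic objects):

```python
# Stochastic union with a stochastic number N of sets: over each sample
# point ω one takes the union of the fibres (X_n)_ω for n up to N(ω).
N = {1: 2, 2: 0, 3: 1}                      # a stochastic natural number
X = {1: {1: {"a"}, 2: {"b"}, 3: {"c"}},     # X_1 and X_2, fibre by fibre
     2: {1: {"d"}, 2: {"e"}, 3: {"f"}}}

def stochastic_union(X, N):
    out = {}
    for w, Nw in N.items():
        # N(ω) = 0 gives the empty union over that sample point
        out[w] = set().union(*(X[n][w] for n in range(1, Nw + 1)))
    return out

# fibre 1: {'a'} ∪ {'d'}; fibre 2: empty union; fibre 3: {'c'}
print(stochastic_union(X, N))
```

This illustrates why {0} is allowed as a value of {N}: the union over a point with {N(\omega)=0} is simply empty.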

The Cartesian product {(X \times Y)|\Omega} of two stochastic sets may be defined by setting {\Gamma((X \times Y)|E) := \Gamma(X|E) \times \Gamma(Y|E)} for all events {E}, with the obvious restriction maps; this is easily seen to be another stochastic set. This lets one define the concept of a {k}-ary operation {f: (X_1 \times \dots \times X_k)|\Omega \rightarrow Y|\Omega} from {k} stochastic sets {X_1,\dots,X_k} to another stochastic set {Y}, or a {k}-ary relation {R: (X_1 \times \dots \times X_k)|\Omega \rightarrow \{\hbox{true}, \hbox{false}\}|\Omega}. In particular, given {x_i \in X_i|\Omega} for {i=1,\dots,k}, the relation {R(x_1,\dots,x_k)} may be deterministically true, deterministically false, or have some other stochastic truth value.

Remark 3 In the degenerate case when {\Omega} is null, stochastic logic becomes a bit weird: all stochastic statements are deterministically true, as are their stochastic negations, since every event in {\Omega} (even the empty set) now holds with full probability. Among other pathologies, the empty set now has a global element over {\Omega} (this is analogous to the notorious convention {0^0=1}), and any two deterministic objects {x,y} become equal over {\Omega}: {x|\Omega=y|\Omega}.

The following simple observation is crucial to subsequent discussion. If {(x_n)_{n \in {\bf N}_+}} is a sequence taking values in the global elements {\Gamma(X|\Omega)} of a stochastic space {X|\Omega}, then we may also define global elements {x_n \in \Gamma(X|\Omega)} for stochastic indices {n \in {\bf N}_+|\Omega} as well, by appealing to the countable gluing axiom to glue together {x_{n_0}} restricted to the set {\{ \omega \in \Omega: n(\omega) = n_0\}} for each deterministic natural number {n_0} to form {x_n}. With this definition, the map {n \mapsto x_n} is a stochastic function from {{\bf N}_+|\Omega} to {X|\Omega}; indeed, this creates a one-to-one correspondence between external sequences (maps {n \mapsto x_n} from {{\bf N}_+} to {\Gamma(X|\Omega)}) and stochastic sequences (stochastic functions {n \mapsto x_n} from {{\bf N}_+|\Omega} to {X|\Omega}). Similarly with {{\bf N}_+} replaced by any other at most countable set. This observation will be important in allowing many deterministic arguments involving sequences to be carried over to the stochastic setting.
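In the discrete model, evaluating a sequence of global elements at a stochastic index is just the pointwise formula {x_n(\omega) = x_{n(\omega)}(\omega)}, as the following toy sketch shows (hypothetical dict representations as before):

```python
# A sequence of global elements x_1, x_2, ... can be evaluated at a
# stochastic index n by gluing x_{n_0} over the events {ω : n(ω) = n_0}.
xs = {1: {1: 10, 2: 20}, 2: {1: 30, 2: 40}}   # x_1, x_2 as maps on Ω = {1, 2}
n = {1: 2, 2: 1}                              # a stochastic index

def eval_at_stochastic_index(xs, n):
    """(x_n)(ω) := x_{n(ω)}(ω); the countable gluing axiom in miniature."""
    return {w: xs[n[w]][w] for w in n}

print(eval_at_stochastic_index(xs, n))  # {1: 30, 2: 20}
```

The countable gluing axiom is exactly what guarantees that this pointwise recipe produces a genuine global element in the general measure-theoretic setting.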

We now specialise from the extremely broad discipline of set theory to the more focused discipline of real analysis. There are two fundamental axioms that underlie real analysis (and in particular distinguish it from real algebra). The first is the Archimedean property, which we phrase in the “no infinitesimal” formulation as follows:

Proposition 2 (Archimedean property) Let {x \in {\bf R}} be such that {x \leq 1/n} for all positive natural numbers {n}. Then {x \leq 0}.

The other is the least upper bound axiom:

Proposition 3 (Least upper bound axiom) Let {S} be a non-empty subset of {{\bf R}} which has an upper bound {M \in {\bf R}}, thus {x \leq M} for all {x \in S}. Then there exists a unique real number {\sup S \in {\bf R}} with the following properties:

  • {x \leq \sup S} for all {x \in S}.
  • For any real {L < \sup S}, there exists {x \in S} such that {L < x \leq \sup S}.
  • {\sup S \leq M}.

Furthermore, {\sup S} does not depend on the choice of {M}.

The Archimedean property extends easily to the stochastic setting:

Proposition 4 (Stochastic Archimedean property) Let {x \in \Gamma({\bf R}|\Omega)} be such that {x \leq 1/n} for all deterministic natural numbers {n}. Then {x \leq 0}.

Remark 4 Here, incidentally, is one place in which this stochastic formalism deviates from the nonstandard analysis formalism, as the latter certainly permits the existence of infinitesimal elements. On the other hand, we caution that stochastic real numbers are permitted to be unbounded, so that the formulation of the Archimedean property which asserts that every real number is bounded in magnitude by some natural number is not valid in the stochastic setting.

The proof is easy and is left to the reader. The least upper bound axiom also extends nicely to the stochastic setting, but the proof requires more work (in particular, our argument uses the monotone convergence theorem):

Theorem 5 (Stochastic least upper bound axiom) Let {S|\Omega} be a stochastic subset of {{\bf R}|\Omega} which has a global upper bound {M \in {\bf R}|\Omega}, thus {x \leq M} for all {x \in \Gamma(S|\Omega)}, and is globally non-empty in the sense that there is at least one global element {x \in \Gamma(S|\Omega)}. Then there exists a unique stochastic real number {\sup S \in \Gamma({\bf R}|\Omega)} with the following properties:

  • {x \leq \sup S} for all {x \in \Gamma(S|\Omega)}.
  • For any stochastic real {L < \sup S}, there exists {x \in \Gamma(S|\Omega)} such that {L < x \leq \sup S}.
  • {\sup S \leq M}.

Furthermore, {\sup S} does not depend on the choice of {M}.

For future reference, we note that the same result holds with {{\bf R}} replaced by {{\bf N} \cup \{+\infty\}} throughout, since the latter may be embedded in the former, for instance by mapping {n} to {1 - \frac{1}{n+1}} and {+\infty} to {1}. In applications, the above theorem serves as a reasonable substitute for the countable axiom of choice, which does not appear to hold in unrestricted generality relative to a measure space; in particular, it can be used to generate various extremising sequences for stochastic functionals on various stochastic function spaces.

Proof: Uniqueness is clear (using the Archimedean property), as well as the independence on {M}, so we turn to existence. By using an order-preserving map from {{\bf R}} to {(-1,1)} (e.g. {x \mapsto \frac{2}{\pi} \hbox{arctan}(x)}) we may assume that {S|\Omega} is a subset of {(-1,1)|\Omega}, and that {M < 1}.

We observe that {\Gamma(S|\Omega)} is a lattice: if {x, y \in \Gamma(S|\Omega)}, then {\max(x,y)} and {\min(x,y)} also lie in {\Gamma(S|\Omega)}. Indeed, {\max(x,y)} may be formed by appealing to the countable gluing axiom to glue {y} (restricted to the set {\{ \omega \in \Omega: x(\omega) < y(\omega) \}}) with {x} (restricted to the set {\{ \omega \in \Omega: x(\omega) \geq y(\omega) \}}), and similarly for {\min(x,y)}. (Here we use the fact that relations such as {<} are Borel measurable on {{\bf R}}.)

Let {A \in {\bf R}} denote the deterministic quantity

\displaystyle  A := \sup \{ \int_\Omega x(\omega)\ d\mu(\omega): x \in \Gamma(S|\Omega) \}

then (by Proposition 3!) {A} is well-defined; here we use the hypothesis that {\mu(\Omega)} is finite. Thus we may find a sequence {(x_n)_{n \in {\bf N}}} of elements {x_n} of {\Gamma(S|\Omega)} such that

\displaystyle  \int_\Omega x_n(\omega)\ d\mu(\omega) \rightarrow A \hbox{ as } n \rightarrow \infty. \ \ \ \ \ (1)

Using the lattice property, we may assume that the {x_n} are non-decreasing: {x_n \leq x_m} whenever {n \leq m}. If we then define {\sup S(\omega) := \sup_n x_n(\omega)} (after choosing measurable representatives of each equivalence class {x_n}), then {\sup S} is a stochastic real with {\sup S \leq M}.

If {x \in \Gamma(S|\Omega)}, then {\max(x,x_n) \in \Gamma(S|\Omega)}, and so

\displaystyle  \int_\Omega \max(x,x_n)\ d\mu(\omega) \leq A.

From this and (1) we conclude that

\displaystyle  \int_\Omega \max(x-x_n,0)\ d\mu(\omega) \rightarrow 0 \hbox{ as } n \rightarrow \infty.

From monotone convergence, we conclude that

\displaystyle  \int_\Omega \max(x-\sup S,0)\ d\mu(\omega) = 0

and so {x \leq \sup S}, as required.

Now let {L < \sup S} be a stochastic real. After choosing measurable representatives of each relevant equivalence class, we see that for almost every {\omega \in \Omega}, we can find a natural number {n(\omega)} with {x_{n(\omega)} > L}. If we choose {n(\omega)} to be the first such positive natural number when it exists, and (say) {1} otherwise, then {n} is a stochastic positive natural number and {L < x_n}. The claim follows. \Box
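A finite discrete sketch of the key mechanism in this proof may be helpful (a toy model only, with integer values to avoid floating-point noise): among global elements of {S}, one maximises the integral, and the pointwise supremum of a maximising family, which in the finite case is reached by finitely many pointwise maxima thanks to the lattice property, dominates every element of {S}.

```python
# Discrete sketch of the proof's construction: the pointwise sup of an
# integral-maximising family in a lattice S dominates every element of S.
mu = {1: 1, 2: 1}                        # a finite (not necessarily prob.) measure
S = [{1: 1, 2: 9}, {1: 8, 2: 2}]         # some global elements of S

def integral(x):
    return sum(x[w] * mu[w] for w in mu)

def stochastic_sup(S):
    # closure of S under pointwise max means this sup is the limit of a
    # non-decreasing, integral-maximising sequence inside S
    return {w: max(x[w] for x in S) for w in mu}

sup_S = stochastic_sup(S)
print(integral(sup_S))   # 17: strictly larger than either element's integral (10)
assert all(x[w] <= sup_S[w] for x in S for w in mu)
```

Note how the supremum beats each individual element in integral, mirroring the role of the maximising sequence (1) and monotone convergence in the proof above.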

Remark 5 One can abstract away the role of the measure {\mu} here, leaving only the ideal of null sets. The property that the measure is finite is then replaced by the more general property that given any non-empty family of measurable sets, there is an at most countable union of sets in that family that is an upper bound modulo null sets for all elements in that family.

Using Proposition 4 and Theorem 5, one can then revisit many of the other foundational results of deterministic real analysis, and develop stochastic analogues; we give some examples of this below the fold (focusing on the Heine-Borel theorem and a case of the spectral theorem). As an application of this formalism, we revisit some of the Furstenberg-Zimmer structural theory of measure-preserving systems, particularly that of relatively compact and relatively weakly mixing systems, and interpret them in this framework, basically as stochastic versions of compact and weakly mixing systems (though with the caveat that the shift map is allowed to act non-trivially on the underlying probability space). As this formalism is “point-free”, in that it avoids explicit use of fibres and disintegrations, it will be well suited for generalising this structure theory to settings in which the underlying probability spaces are not standard Borel, and the underlying groups are uncountable; I hope to discuss such generalisations in future blog posts.

Remark 6 Roughly speaking, stochastic real analysis can be viewed as a restricted subset of classical real analysis in which all operations have to be “measurable” with respect to the base space. In particular, indiscriminate application of the axiom of choice is not permitted, and one should largely restrict oneself to performing countable unions and intersections rather than arbitrary unions or intersections. Presumably one can formalise this intuition with a suitable “countable transfer principle”, but I was not able to formulate a clean and general principle of this sort, instead verifying various assertions about stochastic objects by hand rather than by direct transfer from the deterministic setting. However, it would be desirable to have such a principle, since otherwise one is faced with the tedious task of redoing all the foundations of real analysis (or whatever other base theory of mathematics one is going to be working in) in the stochastic setting by carefully repeating all the arguments.

More generally, topos theory is a good formalism for capturing precisely the informal idea of performing mathematics with certain operations, such as the axiom of choice, the law of the excluded middle, or arbitrary unions and intersections, being somehow “prohibited” or otherwise “restricted”.


Due to some requests, I’m uploading to my blog the slides for my recent talk in Segovia (for the birthday conference of Michael Cowling) on “Hilbert’s fifth problem and approximate groups“.  The slides cover essentially the same range of topics in this series of lecture notes, or in this text of mine, though of course in considerably less detail, given that the slides are meant to be presented in an hour.
