[These are notes intended mostly for myself, as these topics are useful in random matrix theory, but may be of interest to some readers also. -T.]
One of the most fundamental partial differential equations in mathematics is the heat equation

$\displaystyle \partial_t u = \frac{1}{2} \Delta u \ \ \ \ \ (1)$

where $u: [0,+\infty) \times {\bf R}^d \rightarrow {\bf R}$ is a scalar function $u(t,x)$ of both time and space, and $\Delta$ is the Laplacian $\Delta := \sum_{i=1}^d \frac{\partial^2}{\partial x_i^2}$. For the purposes of this post, we will ignore all technical issues of regularity and decay, and always assume that the solutions to equations such as (1) have all the regularity and decay needed to justify formal operations such as the chain rule, integration by parts, or differentiation under the integral sign. The factor of $\frac{1}{2}$ in the definition of the heat propagator is of course an arbitrary normalisation, chosen for some minor technical reasons; one can certainly continue the discussion below with other choices of normalisation if desired.
In probability theory, this equation takes on particular significance when $u$ is restricted to be non-negative, and furthermore to be a probability measure at each time, in the sense that

$\displaystyle \int_{{\bf R}^d} u(t,x)\ dx = 1 \ \ \ \ \ (2)$

for all $t$. (Actually, it suffices to verify this constraint at time $t=0$, as the heat equation (1) will then preserve this constraint.) Indeed, in this case, one can interpret $u(t,\cdot)$ as the probability distribution at time $t$ of a Brownian motion, where $x = x(t)$ is a stochastic process with initial probability distribution $u(0,\cdot)$; see for instance this previous blog post for more discussion.

A model example of a solution to the heat equation is given by the fundamental solution

$\displaystyle u(t,x) = \frac{1}{(2\pi t)^{d/2}} e^{-|x|^2/2t} \ \ \ \ \ (3)$

defined for any $t > 0$, which represents the distribution of Brownian motion of a particle starting at the origin $x=0$ at time $t=0$. At time $t$, $u(t,\cdot)$ represents an ${\bf R}^d$-valued random variable, each coefficient of which is an independent random variable of mean zero and variance $t$. (As $t \rightarrow 0^+$, $u(t,\cdot)$ converges in the sense of distributions to a Dirac mass at the origin.)
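As a quick numerical sanity check of this description (a sketch in Python with numpy; the dimension, time horizon, and sample counts are illustrative choices, not from the post), one can build Brownian paths out of independent gaussian increments and verify that each coordinate at time $t$ has mean approximately zero and variance approximately $t$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, t, steps, paths = 2, 4.0, 200, 5000   # illustrative parameters
dt = t / steps

# Brownian motion from the origin: sums of independent gaussian increments,
# each increment having mean zero and variance dt in every coordinate
increments = rng.normal(0.0, np.sqrt(dt), size=(paths, steps, d))
B_t = increments.sum(axis=1)             # positions at time t

mean_est = B_t.mean(axis=0)              # should be close to 0
var_est = B_t.var(axis=0)                # should be close to t
print(mean_est, var_est)
```

With a few thousand paths the empirical variance lands within a few percent of $t$ in each coordinate, as the fundamental solution predicts.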
The heat equation can also be viewed as the gradient flow for the Dirichlet form

$\displaystyle D(u) := \frac{1}{2} \int_{{\bf R}^d} |\nabla u|^2\ dx, \ \ \ \ \ (4)$

since one has the integration by parts identity

$\displaystyle \int_{{\bf R}^d} \frac{1}{2} \Delta u(x)\, v(x)\ dx = - \left.\frac{d}{ds} D(u+sv)\right|_{s=0} \ \ \ \ \ (5)$

for all smooth, rapidly decreasing $u, v$, which formally implies that $\frac{1}{2} \Delta u$ is (half of) the negative gradient of the Dirichlet energy $D(u)$ with respect to the $L^2({\bf R}^d)$ inner product. Among other things, this implies that the Dirichlet energy decreases in time:

$\displaystyle \partial_t D(u(t)) = - \int_{{\bf R}^d} \left|\frac{1}{2} \Delta u(t,x)\right|^2\ dx \leq 0. \ \ \ \ \ (6)$
For instance, for the fundamental solution (3), one can verify for any time that
Since the integrand in (6) is non-negative, the formula (6) implies that $\int_{{\bf R}^d} |\frac{1}{2} \Delta u(t,x)|^2\ dx$ is integrable in time, and in particular we see that this quantity converges to zero as $t \rightarrow \infty$, in some averaged sense at least; similarly, (8) suggests that the Dirichlet energy also converges to zero. This suggests that $u$ converges to a constant function; but as $u$ is also supposed to decay to zero at spatial infinity, we thus expect solutions to the heat equation in ${\bf R}^d$ to decay to zero in some sense as $t \rightarrow \infty$. However, the decay is only expected to be polynomial in nature rather than exponential; for instance, the solution (3) decays in the $L^\infty$ norm like $O(t^{-d/2})$.
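The monotonicity of the Dirichlet energy can be observed in a crude finite-difference simulation (a minimal sketch, assuming a periodic one-dimensional domain and explicit Euler time stepping; all grid parameters are illustrative choices, not from the post):

```python
import numpy as np

N, L = 128, 2 * np.pi                  # periodic grid (illustrative choice)
dx = L / N
dt = 5e-4                              # small enough for stability (dt < dx**2)
x = np.arange(N) * dx

u = 1.0 + 0.5 * np.sin(x) + 0.3 * np.cos(2 * x)   # positive initial profile
u /= u.sum() * dx                                  # normalise total mass to 1

def dirichlet_energy(w):
    grad = (np.roll(w, -1) - w) / dx
    return 0.5 * (grad ** 2).sum() * dx

mass0 = u.sum() * dx
energies = []
for _ in range(2000):
    energies.append(dirichlet_energy(u))
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    u = u + 0.5 * dt * lap             # explicit Euler step for u_t = (1/2) u_xx

nonincreasing = all(b <= a + 1e-12 for a, b in zip(energies, energies[1:]))
mass_drift = abs(u.sum() * dx - mass0)
print(nonincreasing, mass_drift)
```

The discrete energy decreases at every step while the total mass is conserved, mirroring (6) and the probability-measure constraint.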
There are other quantities relating to $u$ that also decrease in time under heat flow, particularly in the important case when $u$ is a probability measure. In this case, it is natural to introduce the entropy

$\displaystyle S(t) := \int_{{\bf R}^d} u(t,x) \log u(t,x)\ dx. \ \ \ \ \ (9)$

Thus, for instance, if $u(t,\cdot)$ is the uniform distribution on some measurable subset $\Omega$ of ${\bf R}^d$ of finite measure $|\Omega|$, the entropy would be $-\log |\Omega|$. Intuitively, as the entropy decreases, the probability distribution gets wider and flatter. For instance, in the case of the fundamental solution (3), one has $S(t) = -\frac{d}{2} \log t + O(1)$ for any $t > 0$, reflecting the fact that $u(t,\cdot)$ is approximately uniformly distributed on a ball of radius $O(\sqrt{t})$ (and thus of measure $O(t^{d/2})$).
A short formal computation shows (if one assumes for simplicity that $u$ is strictly positive, which is not an unreasonable hypothesis, particularly in view of the strong maximum principle) using (9), (5) that

$\displaystyle \partial_t S(t) = - \frac{1}{2} \int_{{\bf R}^d} \frac{|\nabla u(t,x)|^2}{u(t,x)}\ dx \leq 0.$
In particular, the entropy is decreasing, which corresponds well to one’s intuition that the heat equation (or Brownian motion) should serve to spread out a probability distribution over time.
Actually, one can say more: the rate of decrease of the entropy is itself decreasing, or in other words the entropy is convex. I do not have a satisfactorily intuitive reason for this phenomenon, but it can be proved by straightforward application of basic several variable calculus tools (such as the chain rule, product rule, quotient rule, and integration by parts), and completing the square. Namely, by using the chain rule we have

$\displaystyle \Delta F(u) = F'(u) \Delta u + F''(u) |\nabla u|^2,$

valid for any smooth function $F: {\bf R} \rightarrow {\bf R}$, we see from (1) that

$\displaystyle \partial_t F(u) = \frac{1}{2} \Delta F(u) - \frac{1}{2} F''(u) |\nabla u|^2$
and thus (again assuming that , and hence , is strictly positive to avoid technicalities)
We thus have
It is now convenient to compute the second derivative $\partial_t^2 S(t)$ using the Einstein summation convention to hide the summation over indices $i, j \in \{1,\dots,d\}$. We have
By integration by parts and interchanging partial derivatives, we may write the first integral as
and from the quotient and product rules, we may write the second integral as
Gathering terms, completing the square, and making the summations explicit again, we see that
and so in particular the rate of decrease $-\partial_t S(t)$ is itself always decreasing.
The above identity can also be written as
Exercise 1 Give an alternate proof of the above identity by writing , and deriving the equation for .
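Both monotonicity properties can be observed numerically along an explicit solution. A mixture of two gaussian bumps, each with variance growing linearly in time, solves the heat equation exactly; the sketch below (the particular mixture, grid, and times are assumptions made for the illustration) evaluates the entropy on a grid at equally spaced times and checks that it is both decreasing and convex:

```python
import numpy as np

dx = 0.01
x = np.arange(-40.0, 40.0, dx)         # wide grid so the truncated tails are negligible

def u(t):
    # exact solution of u_t = (1/2) u_xx: two gaussian bumps centred at -3 and 3,
    # each with variance 0.5 + t at time t
    v = 0.5 + t
    g = lambda m: np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    return 0.5 * g(-3.0) + 0.5 * g(3.0)

def entropy(w):
    w = np.maximum(w, 1e-300)          # avoid log(0) in the far tails
    return (w * np.log(w)).sum() * dx

times = np.linspace(0.5, 5.0, 10)      # equally spaced times
S = np.array([entropy(u(t)) for t in times])

decreasing = bool(np.all(np.diff(S) < 0))
convex = bool(np.all(np.diff(S, 2) >= -1e-9))   # second differences nonnegative
print(decreasing, convex)
```

The first differences of the computed entropy are all negative and the second differences all nonnegative, matching the monotonicity and convexity claims.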
It was observed in a well known paper of Bakry and Emery that the above monotonicity properties hold for a much larger class of heat flow-type equations, and lead to a number of important relations between energy and entropy, such as the log-Sobolev inequality of Gross and of Federbush, and the hypercontractivity inequality of Nelson; we will discuss one such family of generalisations (or more precisely, variants) below the fold.
One theme in this course will be the central role played by the gaussian random variables $N(\mu, \sigma^2)$. Gaussians have an incredibly rich algebraic structure, and many results about general random variables can be established by first using this structure to verify the result for gaussians, and then using universality techniques (such as the Lindeberg exchange strategy) to extend the results to more general variables.
One way to exploit this algebraic structure is to continuously deform the variance from an initial variance of zero (so that the random variable is deterministic) to some final level . We would like to use this to give a continuous family of random variables as (viewed as a “time” parameter) runs from to .
At present, we have not completely specified what should be, because we have only described the individual distribution of each , and not the joint distribution. However, there is a very natural way to specify a joint distribution of this type, known as Brownian motion. In these notes we lay the necessary probability theory foundations to set up this motion, and indicate its connection with the heat equation, the central limit theorem, and the Ornstein-Uhlenbeck process. This is the beginning of stochastic calculus, which we will not develop fully here.
We will begin with one-dimensional Brownian motion, but it is a simple matter to extend the process to higher dimensions. In particular, we can define Brownian motion on vector spaces of matrices, such as the space of Hermitian matrices. This process is equivariant with respect to conjugation by unitary matrices, and so we can quotient out by this conjugation and obtain a new process on the quotient space, or in other words on the spectrum of Hermitian matrices. This process is called Dyson Brownian motion, and turns out to have a simple description in terms of ordinary Brownian motion; it will play a key role in several of the subsequent notes in this course.
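The unitary equivariance underlying this construction is easy to sanity-check numerically. The sketch below (the matrix size, step count, and step size are illustrative assumptions) runs a Brownian motion on Hermitian matrices by summing independent Hermitian gaussian increments, and verifies that conjugating by a fixed unitary matrix leaves the spectrum unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 8, 100, 0.01            # illustrative parameters

# Brownian motion on Hermitian matrices: sum of independent Hermitian
# gaussian increments of size sqrt(dt)
H = np.zeros((n, n), dtype=complex)
for _ in range(steps):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H += np.sqrt(dt) * (A + A.conj().T) / 2

# a fixed unitary matrix, from the QR factorisation of a random complex matrix
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

eig_H = np.linalg.eigvalsh(H)                      # spectrum, sorted ascending
eig_conj = np.linalg.eigvalsh(Q @ H @ Q.conj().T)  # spectrum after conjugation
spectrum_gap = float(np.max(np.abs(eig_H - eig_conj)))
print(spectrum_gap)
```

The two spectra agree to machine precision, which is exactly the equivariance that allows one to quotient out the conjugation action and obtain Dyson Brownian motion on the eigenvalues.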
Let $X_1, X_2, X_3, \dots$ be iid copies of an absolutely integrable real scalar random variable $X$, and form the partial sums $S_n := X_1 + \dots + X_n$. As we saw in the last set of notes, the law of large numbers ensures that the empirical averages $S_n/n$ converge (both in probability and almost surely) to a deterministic limit, namely the mean ${\bf E} X$ of the reference variable $X$. Furthermore, under some additional moment hypotheses on the underlying variable $X$, we can obtain square root cancellation for the fluctuation $\frac{S_n}{n} - {\bf E} X$ of the empirical average from the mean. To simplify the calculations, let us first restrict to the case of mean zero and variance one, thus

$\displaystyle {\bf E} X = 0, \qquad {\bf Var}(X) = {\bf E} X^2 = 1.$
Then, as computed in previous notes, the normalised fluctuation $S_n/\sqrt{n}$ also has mean zero and variance one:

$\displaystyle {\bf E} \frac{S_n}{\sqrt{n}} = 0, \qquad {\bf Var}\left( \frac{S_n}{\sqrt{n}} \right) = 1.$
This and Chebyshev’s inequality already indicate that the “typical” size of $S_n$ is $O(\sqrt{n})$, thus for instance $\frac{S_n}{\sqrt{n}\, \omega(n)}$ goes to zero in probability for any $\omega(n)$ that goes to infinity as $n \rightarrow \infty$. If we also have a finite fourth moment ${\bf E} |X|^4 < \infty$, then the calculations of the previous notes also give a fourth moment estimate

$\displaystyle {\bf E} \left|\frac{S_n}{\sqrt{n}}\right|^4 = O(1).$
From this and the Paley-Zygmund inequality (Exercise 42 of Notes 1) we also get some lower bound for $S_n$ of the form

$\displaystyle {\bf P}( |S_n| \geq \varepsilon \sqrt{n} ) \geq \varepsilon$

for some absolute constant $\varepsilon > 0$ and for $n$ sufficiently large; this indicates in particular that $\frac{S_n \omega(n)}{\sqrt{n}}$ does not converge in any reasonable sense to something finite for any $\omega(n)$ that goes to infinity.
The question remains as to what happens to the ratio $S_n/\sqrt{n}$ itself, without multiplying or dividing by any factor $\omega(n)$. A first guess would be that these ratios converge in probability or almost surely, but this is unfortunately not the case:
Proposition 1 Let $X_1, X_2, X_3, \dots$ be iid copies of an absolutely integrable real scalar random variable $X$ with mean zero, variance one, and finite fourth moment, and write $S_n := X_1 + \dots + X_n$. Then the random variables $S_n/\sqrt{n}$ do not converge in probability or almost surely to any limit, and neither does any subsequence of these random variables.
Proof: Suppose for contradiction that some subsequence $S_{n_j}/\sqrt{n_j}$ converged in probability or almost surely to a limit $Y$. By passing to a further subsequence we may assume that the convergence is in the almost sure sense. Since all of the $S_{n_j}/\sqrt{n_j}$ have mean zero, variance one, and bounded fourth moment, Theorem 24 of Notes 1 implies that the limit $Y$ also has mean zero and variance one. On the other hand, $Y$ is a tail random variable and is thus almost surely constant by the Kolmogorov zero-one law from Notes 3. Since constants have variance zero, we obtain the required contradiction.
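The obstruction in this proposition can also be seen numerically: under the mean zero, variance one normalisation, a short covariance computation gives ${\bf E}\, |S_{4n}/\sqrt{4n} - S_n/\sqrt{n}|^2 = 1$ for every $n$, so the normalised sums at times $n$ and $4n$ stay far apart and cannot be Cauchy in probability. The sketch below (random signs as the step distribution, and all the parameters, are illustrative choices) estimates this quantity by simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 250, 5000                  # illustrative parameters

# mean zero, variance one steps: independent random signs
X = rng.choice([-1.0, 1.0], size=(trials, 4 * n))
S_n = X[:, :n].sum(axis=1)
S_4n = X.sum(axis=1)

gap = S_4n / np.sqrt(4 * n) - S_n / np.sqrt(n)
mean_sq_gap = float((gap ** 2).mean())   # should stay close to 1, not shrink to 0
print(mean_sq_gap)
```

The estimate stays near $1$ no matter how large $n$ is taken, which is a quantitative version of the non-convergence in Proposition 1.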
Nevertheless there is an important limit for the ratio $S_n/\sqrt{n}$, which requires one to replace the notions of convergence in probability or almost sure convergence by the weaker concept of convergence in distribution.
Definition 2 (Vague convergence and convergence in distribution) Let $R$ be a locally compact Hausdorff topological space with the Borel $\sigma$-algebra. A sequence of finite measures $\mu_n$ on $R$ is said to converge vaguely to another finite measure $\mu$ if one has

$\displaystyle \int_R G(x)\ d\mu_n(x) \rightarrow \int_R G(x)\ d\mu(x)$

as $n \rightarrow \infty$ for all continuous compactly supported functions $G: R \rightarrow {\bf R}$. (Vague convergence is also known as weak convergence, although strictly speaking the terminology weak-* convergence would be more accurate.) A sequence of random variables $X_n$ taking values in $R$ is said to converge in distribution (or converge weakly or converge in law) to another random variable $X$ if the distributions $\mu_{X_n}$ converge vaguely to the distribution $\mu_X$, or equivalently if

$\displaystyle {\bf E} G(X_n) \rightarrow {\bf E} G(X)$

as $n \rightarrow \infty$ for all continuous compactly supported functions $G: R \rightarrow {\bf R}$.
One could in principle try to extend this definition beyond the locally compact Hausdorff setting, but certain pathologies can occur when doing so (e.g. failure of the Riesz representation theorem), and we will never need to consider vague convergence in spaces that are not locally compact Hausdorff, so we restrict to this setting for simplicity.
Note that the notion of convergence in distribution depends only on the distribution of the random variables involved. One consequence of this is that convergence in distribution does not produce unique limits: if $X_n$ converges in distribution to $X$, and $Y$ has the same distribution as $X$, then $X_n$ also converges in distribution to $Y$. However, limits are unique up to equivalence in distribution (this is a consequence of the Riesz representation theorem, discussed for instance in this blog post). As a consequence of the insensitivity of convergence in distribution to equivalence in distribution, we may also legitimately talk about convergence of distribution of a sequence of random variables $X_n$ to another random variable $X$ even when all the random variables $X_n$ and $X$ involved are being modeled by different probability spaces (e.g. each $X_n$ is modeled by one probability space, and $X$ is modeled by another, with no coupling presumed between these spaces). This is in contrast to the stronger notions of convergence in probability or almost sure convergence, which require all the random variables to be modeled by a common probability space. Also, by an abuse of notation, we can say that a sequence $X_n$ of random variables converges in distribution to a probability measure $\mu$, when $\mu_{X_n}$ converges vaguely to $\mu$. Thus we can talk about a sequence of random variables converging in distribution to a uniform distribution, a gaussian distribution, etc..
From the dominated convergence theorem (available for both convergence in probability and almost sure convergence) we see that convergence in probability or almost sure convergence implies convergence in distribution. The converse is not true, due to the insensitivity of convergence in distribution to equivalence in distribution; for instance, if $X_n$ are iid copies of a non-deterministic scalar random variable $X$, then the $X_n$ trivially converge in distribution to $X$, but will not converge in probability or almost surely (as one can see from the zero-one law). However, there are some partial converses that relate convergence in distribution to convergence in probability; see Exercise 10 below.
Remark 3 The notion of convergence in distribution is somewhat similar to the notion of convergence in the sense of distributions that arises in distribution theory (discussed for instance in this previous blog post), however strictly speaking the two notions of convergence are distinct and should not be confused with each other, despite the very similar names.
The notion of convergence in distribution simplifies in the case of real scalar random variables:

Proposition 4 Let $X_n$ be a sequence of real scalar random variables, and let $X$ be a further real scalar random variable. Then the following are equivalent:

- (i) $X_n$ converges in distribution to $X$.
- (ii) $F_{X_n}(t)$ converges to $F_X(t)$ for each continuity point $t$ of $F_X$ (i.e. for all real numbers $t$ at which $F_X$ is continuous). Here $F_X: {\bf R} \rightarrow [0,1]$ is the cumulative distribution function of $X$, defined by the formula $F_X(t) := {\bf P}(X \leq t)$.
Proof: First suppose that $X_n$ converges in distribution to $X$, and that $F_X$ is continuous at $t$. For any $\varepsilon > 0$, one can find a $\delta > 0$ such that
for every . One can also find an larger than such that and . Thus
Let be a continuous function supported on that equals on . Then by the above discussion we have
for large enough . In particular
A similar argument, replacing with a continuous function supported on that equals on gives
for large enough. Putting the two estimates together gives
for $n$ large enough; sending $\varepsilon \rightarrow 0$, we obtain the claim.
Conversely, suppose that $F_{X_n}(t)$ converges to $F_X(t)$ at every continuity point $t$ of $F_X$. Let $G$ be a continuous compactly supported function; then it is uniformly continuous. As $F_X$ is monotone increasing, it can only have countably many points of discontinuity. From these two facts one can find, for any $\varepsilon > 0$, a simple function $G_\varepsilon(t) = \sum_{i=1}^m c_i 1_{(t_i, t_{i+1}]}(t)$ for some $t_1 < \dots < t_{m+1}$ that are points of continuity of $F_X$, and real numbers $c_i$, such that $|G(t) - G_\varepsilon(t)| \leq \varepsilon$ for all $t$. Thus

$\displaystyle {\bf E} G_\varepsilon(X_n) = \sum_{i=1}^m c_i ( F_{X_n}(t_{i+1}) - F_{X_n}(t_i) ).$

Similarly for $X_n$ replaced by $X$. Subtracting and taking limit superior, we conclude that

$\displaystyle \limsup_{n \rightarrow \infty} |{\bf E} G(X_n) - {\bf E} G(X)| \leq 2\varepsilon,$

and on sending $\varepsilon \rightarrow 0$, we obtain that $X_n$ converges in distribution to $X$ as claimed.
The restriction to continuity points of $F_X$ is necessary. Consider for instance the deterministic random variables $X_n := 1/n$; then $X_n$ converges almost surely (and hence in distribution) to $0$, but $F_{X_n}(0) = 0$ does not converge to $F_0(0) = 1$.
Example 5 For any natural number $n$, let $X_n$ be a discrete random variable drawn uniformly from the finite set $\{\frac{1}{n}, \frac{2}{n}, \dots, \frac{n}{n}\}$, and let $X$ be the continuous random variable drawn uniformly from $[0,1]$. Then $X_n$ converges in distribution to $X$. Thus we see that a continuous random variable can emerge as the limit of discrete random variables.
Example 6 For any natural number $n$, let $X_n$ be a continuous random variable drawn uniformly from $[0, 1/n]$; then $X_n$ converges in distribution to the deterministic real number $0$. Thus we see that discrete (or even deterministic) random variables can emerge as the limit of continuous random variables.
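Taking the uniform distribution on $n$ equally spaced points of $[0,1]$ as a concrete instance of Example 5 (the precise finite set is an assumption made for the illustration), the convergence of cumulative distribution functions in Proposition 4 can be checked in closed form, since the discrete CDF differs from $F_X(t) = t$ by at most $1/n$ at every point:

```python
import numpy as np

n = 1000
t = np.linspace(0.0, 1.0, 501)         # grid of evaluation points in [0, 1]

# CDF of the uniform distribution on {1/n, 2/n, ..., n/n}:
# P(X_n <= t) counts the lattice points below t
F_n = np.floor(n * t) / n

gap = float(np.abs(F_n - t).max())     # should be at most 1/n
print(gap)
```

The pointwise gap of at most $1/n$ makes the vague convergence of Example 5 completely explicit.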
Exercise 7 (Portmanteau theorem) Show that the properties (i) and (ii) in Proposition 4 are also equivalent to the following three statements:

- (iii) One has $\limsup_{n \rightarrow \infty} {\bf P}( X_n \in K ) \leq {\bf P}( X \in K )$ for all closed sets $K \subset {\bf R}$.
- (iv) One has $\liminf_{n \rightarrow \infty} {\bf P}( X_n \in U ) \geq {\bf P}( X \in U )$ for all open sets $U \subset {\bf R}$.
- (v) For any Borel set $\Omega \subset {\bf R}$ whose topological boundary $\partial \Omega$ is such that ${\bf P}( X \in \partial \Omega ) = 0$, one has $\lim_{n \rightarrow \infty} {\bf P}( X_n \in \Omega ) = {\bf P}( X \in \Omega )$.

(Note: to prove this theorem, you may wish to invoke Urysohn’s lemma. To deduce (iii) from (i), you may wish to start with the case of compact $K$.)
We can now state the famous central limit theorem:
Theorem 8 (Central limit theorem) Let $X_1, X_2, X_3, \dots$ be iid copies of a scalar random variable $X$ of finite mean $\mu := {\bf E} X$ and finite non-zero variance $\sigma^2 := {\bf Var}(X)$. Let $S_n := X_1 + \dots + X_n$. Then the random variables $\frac{S_n - n\mu}{\sqrt{n} \sigma}$ converge in distribution to a random variable with the standard normal distribution $N(0,1)$ (that is to say, a random variable with probability density function $x \mapsto \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$). Thus, by abuse of notation

$\displaystyle \frac{S_n - n\mu}{\sqrt{n} \sigma} \rightarrow N(0,1).$

In the normalised case $\mu = 0$, $\sigma^2 = 1$ when $X$ has mean zero and unit variance, this simplifies to

$\displaystyle \frac{S_n}{\sqrt{n}} \rightarrow N(0,1).$
Using Proposition 4 (and the fact that the cumulative distribution function associated to $N(0,1)$ is continuous), the central limit theorem is equivalent to asserting that

$\displaystyle {\bf P}\left( \frac{S_n - n\mu}{\sqrt{n} \sigma} \leq t \right) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx$

as $n \rightarrow \infty$ for any $t \in {\bf R}$, or equivalently that

$\displaystyle {\bf P}\left( a \leq \frac{S_n - n\mu}{\sqrt{n} \sigma} \leq b \right) \rightarrow \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\ dx$

for any $a < b$.
Informally, one can think of the central limit theorem as asserting that $S_n$ approximately behaves like it has distribution $N(n\mu, n\sigma^2)$ for large $n$, where $N(\mu, \sigma^2)$ is the normal distribution with mean $\mu$ and variance $\sigma^2$, that is to say the distribution with probability density function $x \mapsto \frac{1}{\sqrt{2\pi} \sigma} e^{-(x-\mu)^2/2\sigma^2}$. The integrals $\frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx$ can be written in terms of the error function $\mathrm{erf}$ as $\frac{1}{2} ( 1 + \mathrm{erf}( t/\sqrt{2} ) )$.
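The convergence can be checked by simulation at any fixed threshold. The sketch below (the uniform step distribution, normalised to mean zero and variance one, and the sample sizes are illustrative choices) compares the empirical probability that $S_n/\sqrt{n} \leq t$ against the gaussian value $\frac{1}{2}(1 + \mathrm{erf}(t/\sqrt{2}))$:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n, trials, t = 300, 8000, 0.5          # illustrative parameters

# X uniform on [-sqrt(3), sqrt(3)]: mean zero, variance one
X = rng.uniform(-math.sqrt(3), math.sqrt(3), size=(trials, n))
Z = X.sum(axis=1) / math.sqrt(n)       # normalised sums S_n / sqrt(n)

empirical = float((Z <= t).mean())
gaussian = 0.5 * (1 + math.erf(t / math.sqrt(2)))   # standard normal CDF at t
print(empirical, gaussian)
```

Already at a few hundred summands the empirical probability agrees with the gaussian prediction to within sampling error.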
The central limit theorem is a basic example of the universality phenomenon in probability – many statistics involving a large system of many independent (or weakly dependent) variables (such as the normalised sums $\frac{S_n - n\mu}{\sqrt{n}\sigma}$) end up having a universal asymptotic limit (in this case, the normal distribution), regardless of the precise makeup of the underlying random variable $X$ that made up that system. Indeed, the universality of the normal distribution is such that it arises in many other contexts than the fluctuation of iid random variables; the central limit theorem is merely the first place in probability theory where it makes a prominent appearance.
We will give several proofs of the central limit theorem in these notes; each of these proofs has its advantages and disadvantages, and each can be extended to prove many further results beyond the central limit theorem. We first give Lindeberg’s proof of the central limit theorem, based on exchanging (or swapping) each component $X_i$ of the sum $S_n$ in turn. This proof gives an accessible explanation as to why there should be a universal limit for the central limit theorem; one then computes directly with gaussians to verify that it is the normal distribution which is the universal limit. Our second proof is the most popular one taught in probability texts, namely the Fourier-analytic proof based around the concept of the characteristic function ${\bf E} e^{itX}$ of a real random variable $X$. Thanks to the powerful identities and other results of Fourier analysis, this gives a quite short and direct proof of the central limit theorem, although the arguments may seem rather magical to readers who are not already familiar with Fourier methods. Finally, we give a proof based on the moment method, in the spirit of the arguments in the previous notes; this argument is more combinatorial, but is straightforward and is particularly robust, in particular being well equipped to handle some dependencies between components; we will illustrate this by proving the Erdos-Kac law in number theory by this method. Some further discussion of the central limit theorem (including some further proofs, such as one based on Stein’s method) can be found in this blog post. Some further variants of the central limit theorem, such as local limit theorems, stable laws, and large deviation inequalities, will be discussed in the next (and final) set of notes.
The following exercise illustrates the power of the central limit theorem, by establishing combinatorial estimates which would otherwise require the use of Stirling’s formula to establish.
- (i) Show that $S_n$ takes values in $\{0, 1, \dots, n\}$ with ${\bf P}(S_n = i) = \binom{n}{i} / 2^n$ for each $i = 0, \dots, n$. (This is an example of a binomial distribution.)
- (ii) Assume Stirling’s formula

$\displaystyle n! = (1 + o(1)) \sqrt{2\pi n}\, n^n e^{-n},$

where $o(1)$ denotes a quantity that goes to zero as $n \rightarrow \infty$. (A proof of this formula may be found in this previous blog post.) Using this formula, and without using the central limit theorem, show that

$\displaystyle {\bf P}\left( a \leq \frac{S_n - n/2}{\sqrt{n}/2} \leq b \right) \rightarrow \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\ dx$

as $n \rightarrow \infty$ for any fixed real numbers $a < b$.
The above special case of the central limit theorem was first established by de Moivre and Laplace.
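This special case can be checked against exact binomial probabilities. The sketch below (the fair-coin model and the window $[-1,1]$ are illustrative assumptions) sums the binomial distribution over the central window and compares with the corresponding gaussian integral:

```python
import math

n, a, b = 2000, -1.0, 1.0              # illustrative parameters

def log_binom(n, k):
    # log of the binomial coefficient C(n, k), via log-gamma
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

# S_n = number of heads in n fair coin flips, so P(S_n = k) = C(n, k) / 2^n.
# Sum the probabilities of the outcomes with a <= (k - n/2) / (sqrt(n)/2) <= b.
total = 0.0
for k in range(n + 1):
    z = (k - n / 2) / (math.sqrt(n) / 2)
    if a <= z <= b:
        total += math.exp(log_binom(n, k) - n * math.log(2))

gaussian = 0.5 * (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2)))
print(total, gaussian)
```

The exact binomial mass in the window already matches the gaussian integral to about two decimal places at $n = 2000$; the residual discrepancy is the lattice (continuity-correction) effect of size $O(1/\sqrt{n})$.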
We close this section with some basic facts about convergence of distribution that will be useful in the sequel.
Exercise 10 Let $X_n$ be a sequence of real scalar random variables, and let $X$ be a further real scalar random variable.

- (i) If $X$ is deterministic, show that $X_n$ converges in distribution to $X$ if and only if $X_n$ converges in probability to $X$.
- (ii) Suppose that $X_n$ is independent of $Y_n$ for each $n$, and $X$ independent of $Y$. Show that $(X_n, Y_n)$ converges in distribution to $(X, Y)$ if and only if $X_n$ converges in distribution to $X$ and $Y_n$ converges in distribution to $Y$. (The shortest way to prove this is by invoking the Stone-Weierstrass theorem, but one can also proceed by proving some version of Proposition 4.) What happens if the independence hypothesis is dropped?
- (iii) If $X_n$ converges in distribution to $X$, show that for every $\varepsilon > 0$ there exists $M > 0$ such that ${\bf P}(|X_n| \geq M) \leq \varepsilon$ for all sufficiently large $n$. (That is to say, $X_n$ is a tight sequence of random variables.)
- (iv) Show that $X_n$ converges in distribution to $X$ if and only if, after extending the probability space model if necessary, one can find copies $Y_n$ and $Y$ of $X_n$ and $X$ respectively such that $Y_n$ converges almost surely to $Y$. (Hint: use the Skorohod representation, Exercise 29 of Notes 0.)
- (v) If $X_n$ converges in distribution to $X$, and $F: {\bf R} \rightarrow {\bf R}$ is continuous, show that $F(X_n)$ converges in distribution to $F(X)$. Generalise this claim to the case when $X$ takes values in an arbitrary locally compact Hausdorff space.
- (vi) (Slutsky’s theorem) If $X_n$ converges in distribution to $X$, and $Y_n$ converges in probability to a deterministic limit $c$, show that $X_n + Y_n$ converges in distribution to $X + c$, and $X_n Y_n$ converges in distribution to $cX$. (Hint: either use (iv), or else use (iii) to control some error terms.) This statement combines particularly well with (i). What happens if $c$ is not assumed to be deterministic?
- (vii) (Fatou lemma) If $G: {\bf R} \rightarrow [0, +\infty)$ is continuous, and $X_n$ converges in distribution to $X$, show that $\liminf_{n \rightarrow \infty} {\bf E} G(X_n) \geq {\bf E} G(X)$.
- (viii) (Bounded convergence) If $G: {\bf R} \rightarrow {\bf R}$ is continuous and bounded, and $X_n$ converges in distribution to $X$, show that $\lim_{n \rightarrow \infty} {\bf E} G(X_n) = {\bf E} G(X)$.
- (ix) (Dominated convergence) If $X_n$ converges in distribution to $X$, and there is an absolutely integrable $Y$ such that $|X_n| \leq Y$ almost surely for all $n$, show that $\lim_{n \rightarrow \infty} {\bf E} X_n = {\bf E} X$.
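Parts (i) and (vi) combine to justify the familiar practice of studentising a sum: replacing the true standard deviation by an empirical one does not disturb convergence in distribution, because the empirical standard deviation converges in probability to a deterministic constant. The sketch below (the step distribution and sample sizes are illustrative choices) checks that the self-normalised sums still match the gaussian limit:

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n, trials, t = 300, 8000, 1.0          # illustrative parameters

X = rng.uniform(-1.0, 1.0, size=(trials, n))     # mean zero, variance 1/3
sigma_hat = X.std(axis=1)                        # -> 1/sqrt(3) in probability
T = X.sum(axis=1) / (math.sqrt(n) * sigma_hat)   # studentised sums

empirical = float((T <= t).mean())
gaussian = 0.5 * (1 + math.erf(t / math.sqrt(2)))  # standard normal CDF at t
print(empirical, gaussian)
```

By the central limit theorem together with Slutsky’s theorem, the studentised sums converge in distribution to $N(0,1)$ even though the true variance was never used.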
For future reference we also mention (but will not prove) Prokhorov’s theorem that gives a partial converse to part (iii) of the above exercise:
Theorem 11 (Prokhorov’s theorem) Let $X_n$ be a sequence of real random variables which is tight (that is, for every $\varepsilon > 0$ there exists $M > 0$ such that ${\bf P}(|X_n| \geq M) \leq \varepsilon$ for all sufficiently large $n$). Then there exists a subsequence $X_{n_j}$ which converges in distribution to some random variable $X$ (which may possibly be modeled by a different probability space model than the $X_n$).
The proof of this theorem relies on the Riesz representation theorem, and is beyond the scope of this course; but see for instance Exercise 29 of this previous blog post. (See also the closely related Helly selection theorem, covered in Exercise 30 of the same post.)