We now move away from the world of multiplicative prime number theory covered in Notes 1 and Notes 2, and enter the wider, and complementary, world of non-multiplicative prime number theory, in which one studies statistics related to non-multiplicative patterns, such as twin primes. This creates a major jump in difficulty; for instance, even the analogue of the most basic multiplicative result about the primes, namely Euclid’s theorem that there are infinitely many of them, remains unproven for twin primes. Of course, the situation is even worse for stronger results, such as Euler’s theorem, Dirichlet’s theorem, or the prime number theorem. Finally, even many multiplicative questions about the primes remain open. The most famous of these is the Riemann hypothesis, which gives a sharp asymptotic for the prime counting function (see Proposition 24 from Notes 2). But even if one assumes the Riemann hypothesis, the precise distribution of the error term in this asymptotic (or in related asymptotics, such as those measuring the distribution of primes in short intervals) is not entirely clear.
Despite this, we do have a number of extremely convincing and well supported models for the primes (and related objects) that let us predict what the answer to many prime number theory questions (both multiplicative and non-multiplicative) should be, particularly in asymptotic regimes where one can work with aggregate statistics about the primes, rather than with a small number of individual primes. These models are based on taking some statistical distribution related to the primes (e.g. the primality properties of a randomly selected tuple of integers), and replacing that distribution by a model distribution that is easy to compute with (e.g. a distribution with strong joint independence properties). One can then predict the asymptotic value of various (normalised) statistics about the primes by replacing the relevant statistical distributions of the primes with their simplified models. In this non-rigorous setting, many difficult conjectures on the primes reduce to relatively simple calculations; for instance, all four of the (still unsolved) Landau problems may now be justified in the affirmative by one or more of these models. Indeed, the models are so effective at this task that analytic number theory is in the curious position of being able to confidently predict the answer to a large proportion of the open problems in the subject, whilst not possessing a clear way forward to rigorously confirm these answers!
As it turns out, the models for primes that have turned out to be the most accurate in practice are random models, which involve (either explicitly or implicitly) one or more random variables. This is despite the prime numbers being obviously deterministic in nature; no coins are flipped or dice rolled to create the set of primes. The point is that while the primes have a lot of obvious multiplicative structure (for instance, the product of two primes is never another prime), they do not appear to exhibit much discernible non-multiplicative structure asymptotically, in the sense that they rarely exhibit statistical anomalies in the asymptotic limit that cannot be easily explained in terms of the multiplicative properties of the primes. As such, when considering non-multiplicative statistics of the primes, the primes appear to behave pseudorandomly, and can thus be modeled with reasonable accuracy by a random model. And even for multiplicative problems, which are in principle controlled by the zeroes of the Riemann zeta function, one can obtain good predictions by positing various pseudorandomness properties of these zeroes, so that the distribution of these zeroes can be modeled by a random model.
Of course, one cannot expect perfect accuracy when replicating a deterministic set such as the primes by a probabilistic model of that set, and each of the heuristic models we discuss below has some limitations on the range of statistics about the primes that it can be expected to track with reasonable accuracy. For instance, many of the models about the primes do not fully take into account the multiplicative structure of primes, such as the connection with a zeta function with a meromorphic continuation to the entire complex plane; at the opposite extreme, we have the GUE hypothesis, which appears to accurately model the zeta function, but does not capture such basic properties of the primes as the fact that the primes are all natural numbers. Nevertheless, each of the models described below, when deployed within its sphere of reasonable application, has (possibly after some fine-tuning) given predictions that are in remarkable agreement with numerical computation and with known rigorous theoretical results, as well as with other models in overlapping spheres of application; they are also broadly compatible with the general heuristic (discussed in this previous post) that in the absence of any exploitable structure, asymptotic statistics should default to the most “uniform”, “pseudorandom”, or “independent” distribution allowable.
As hinted at above, we do not have a single unified model for the prime numbers (other than the primes themselves, of course), but instead have an overlapping family of useful models that each appear to accurately describe some, but not all, aspects of the prime numbers. In this set of notes, we will discuss four such models:
- The Cramér random model and its refinements, which model the set of prime numbers by a random set.
- The Möbius pseudorandomness principle, which predicts that the Möbius function does not correlate with any genuinely different arithmetic sequence of reasonable “complexity”.
- The equidistribution of residues principle, which predicts that the residue classes of a large number modulo a small or medium-sized prime behave as if they were independently and uniformly distributed as the prime varies.
- The GUE hypothesis, which asserts that the zeroes of the Riemann zeta function are distributed (at microscopic and mesoscopic scales) like the zeroes of a GUE random matrix, and which generalises the pair correlation conjecture regarding pairs of such zeroes.
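As a quick illustration of the first of these models, here is a short Python sketch (my own illustration, not taken from the notes) comparing the true counts of primes and of twin primes up to a threshold against the naive Cramér model, in which each integer n ≥ 3 is independently declared "prime" with probability 1/log n:

```python
import math
import random

def sieve(limit):
    """Return the set of primes up to limit via the sieve of Eratosthenes."""
    is_p = [True] * (limit + 1)
    is_p[0] = is_p[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_p[n]:
            for m in range(n * n, limit + 1, n):
                is_p[m] = False
    return {n for n in range(limit + 1) if is_p[n]}

def cramer_sample(limit, rng):
    """Naive Cramer model: include n >= 3 independently with probability 1/log n."""
    return {n for n in range(3, limit + 1) if rng.random() < 1.0 / math.log(n)}

N = 10_000
rng = random.Random(0)
primes = sieve(N)
model = cramer_sample(N, rng)

print(len(primes), len(model))  # counts of true and "model" primes up to N
twins = sum(1 for p in primes if p + 2 in primes)
model_twins = sum(1 for p in model if p + 2 in model)
print(twins, model_twins)       # twin pairs: true count vs. naive model count
```

The total counts agree well (both are of size about N/log N), but the naive model systematically undercounts twin pairs: the model ignores the correlation coming from divisibility by small primes, and must be corrected by the twin prime constant and local factors to match the Hardy–Littlewood prediction. This is exactly the sort of fine-tuning of a naive model alluded to below.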
This is not an exhaustive list of models for the primes and related objects; for instance, there is also the model in which the major arc contribution in the Hardy-Littlewood circle method is predicted to always dominate, and with regards to various finite groups of number-theoretic importance, such as the class groups discussed in Supplement 1, there are also heuristics of Cohen-Lenstra type. Historically, the first heuristic discussion of the primes along these lines was by Sylvester, who worked informally with a model somewhat related to the equidistribution of residues principle. However, we will not discuss any of these models here.
A word of warning: the discussion of the above four models will inevitably be largely informal, and “fuzzy” in nature. While one can certainly make precise formalisations of at least some aspects of these models, one should not be inflexibly wedded to a specific such formalisation as being “the” correct way to pin down the model rigorously. (To quote the statistician George Box: “all models are wrong, but some are useful”.) Indeed, we will see some examples below the fold in which some finer structure in the prime numbers leads to a correction term being added to a “naive” implementation of one of the above models to make it more accurate, and it is perfectly conceivable that some further such fine-tuning will be applied to one or more of these models in the future. These sorts of mathematical models are in some ways closer in nature to the scientific theories used to model the physical world, than they are to the axiomatic theories one is accustomed to in rigorous mathematics, and one should approach the discussion below accordingly. In particular, and in contrast to the other notes in this course, the material here is not directly used for proving further theorems, which is why we have marked it as “optional” material. Nevertheless, the heuristics and models here are still used indirectly for such purposes, for instance by
- giving a clearer indication of what results one expects to be true, thus guiding one to fruitful conjectures;
- providing a quick way to scan for possible errors in a mathematical claim (e.g. by finding that the main term is off from what a model predicts, or an error term is too small);
- gauging the relative strength of various assertions (e.g. classifying some results as “unsurprising”, others as “potential breakthroughs” or “powerful new estimates”, others as “unexpected new phenomena”, and yet others as “way too good to be true”); or
- setting up heuristic barriers (such as the parity barrier) that one has to resolve before resolving certain key problems (e.g. the twin prime conjecture).
See also my previous essay on the distinction between “rigorous” and “post-rigorous” mathematics, or Thurston’s essay discussing, among other things, the “definition-theorem-proof” model of mathematics and its limitations.
Remark 1 The material in this set of notes presumes some prior exposure to probability theory. See for instance this previous post for a quick review of the relevant concepts.
Van Vu and I have just uploaded to the arXiv our paper “Random matrices have simple spectrum“. Recall that a Hermitian matrix is said to have simple spectrum if all of its eigenvalues are distinct. This is a very typical property for a matrix to have: for instance, as discussed in this previous post, within the space of all Hermitian matrices, the subspace of matrices that fail to have simple spectrum has codimension three, while in the real symmetric case this subspace has codimension two. In particular, given any random matrix ensemble of Hermitian or real symmetric matrices with an absolutely continuous distribution, random matrices drawn from this ensemble will almost surely have simple spectrum.
For discrete random matrix ensembles, though, the above argument breaks down, even though general universality heuristics predict that the statistics of discrete ensembles should behave similarly to those of continuous ensembles. A model case here is the adjacency matrix of an Erdős–Rényi graph: a random graph in which each pair of vertices is independently joined by an edge with a fixed probability. For the purposes of this paper one should view the edge probability as fixed, while the number of vertices is an asymptotic parameter going to infinity. In this context, our main result is the following (answering a question of Babai):
Our argument works for more general Wigner-type matrix ensembles, but for sake of illustration we will stick with the Erdős–Rényi case. Previous work on local universality for such matrix models (e.g. the work of Erdős, Knowles, Yau, and Yin) was able to show that any individual eigenvalue gap did not vanish with high probability, but because there are many different gaps that one has to simultaneously ensure to be non-zero, this did not give Theorem 1, as one is forced to apply the union bound.
Our argument in fact gives simplicity of the spectrum with a failure probability smaller than any fixed negative power of the dimension; in a subsequent paper we also show that it gives a quantitative lower bound on the eigenvalue gaps (analogous to how many results on the singularity probability of random sign matrices can be upgraded to a bound on the least singular value).
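For intuition, one can check the prevalence of simple spectrum by brute force in the smallest non-trivial case. The following Python snippet (an illustration of mine, not taken from the paper) enumerates all adjacency matrices of graphs on three labelled vertices and tests for repeated eigenvalues exactly, using the fact that a cubic polynomial has a repeated root if and only if its discriminant vanishes:

```python
from itertools import product

def has_simple_spectrum_3x3(a12, a13, a23):
    """Exact test for distinct eigenvalues of the symmetric 0/1 adjacency matrix
    [[0, a12, a13], [a12, 0, a23], [a13, a23, 0]]."""
    # Characteristic polynomial: x^3 + b x^2 + c x + d, with
    b = 0                                    # minus the trace (diagonal is zero)
    c = -(a12**2 + a13**2 + a23**2)          # sum of the principal 2x2 minors
    d = -2 * a12 * a13 * a23                 # minus the determinant
    # Discriminant of x^3 + b x^2 + c x + d; zero iff a repeated root exists.
    disc = 18*b*c*d - 4*b**3*d + b**2*c**2 - 4*c**3 - 27*d**2
    return disc != 0

simple = sum(has_simple_spectrum_3x3(*e) for e in product((0, 1), repeat=3))
print(simple, "of 8 graphs on 3 labelled vertices have simple spectrum")
```

Already 6 of the 8 graphs have simple spectrum; the exceptional cases are the empty graph (all eigenvalues zero) and the triangle (eigenvalues 2, -1, -1), and such coincidences become vanishingly rare as the number of vertices grows, in line with Theorem 1.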
The basic idea of the argument can be sketched as follows. Suppose that the random matrix has a repeated eigenvalue. We split the matrix into a large minor together with its final column, which is a random sign vector; crucially, the minor and this column are independent. If the full matrix has a repeated eigenvalue, then by the Cauchy interlacing law, the minor also attains that eigenvalue. Writing down the eigenvector equation for the full matrix at this eigenvalue, extracting the top coefficients, and then taking inner products with an eigenvector of the minor at the same eigenvalue, we conclude (after discarding a scalar factor that we typically expect to be non-zero) that the sign vector must be orthogonal to that eigenvector of the minor.
In other words, in order for the matrix to have a repeated eigenvalue, its final column has to be orthogonal to an eigenvector of the minor. Note that the column and the eigenvector are independent (once we specify which eigenvector of the minor to take). On the other hand, thanks to inverse Littlewood-Offord theory (specifically, we use an inverse Littlewood-Offord theorem of Nguyen and Vu), we know that a random sign vector is unlikely to be orthogonal to any given vector independent of it, unless the coefficients of that given vector are extremely special (specifically, unless most of them lie in a generalised arithmetic progression). The main remaining difficulty is then to show that eigenvectors of a random matrix are typically not of this special form, and this relies on a conditioning argument originally used by Komlós to bound the singularity probability of a random sign matrix. (Basically, if an eigenvector has this special form, then one can use a fraction of the rows and columns of the random matrix to determine the eigenvector completely, while still preserving enough randomness in the remaining portion of the matrix so that this vector will in fact not be an eigenvector with high probability.)
The 2014 Fields medallists have just been announced as (in alphabetical order of surname) Artur Avila, Manjul Bhargava, Martin Hairer, and Maryam Mirzakhani (see also these nice video profiles for the winners, which is a new initiative of the IMU and the Simons foundation). This time four years ago, I wrote a blog post discussing one result from each of the 2010 medallists; I thought I would try to repeat the exercise here, although the work of the medallists this time around is a little bit further away from my own direct area of expertise than last time, and so my discussion will unfortunately be a bit superficial (and possibly not completely accurate) in places. As before, I am picking these results based on my own idiosyncratic tastes, and they should not be viewed as necessarily being the “best” work of these medallists. (See also the press releases for Avila, Bhargava, Hairer, and Mirzakhani.)
Artur Avila works in dynamical systems and in the study of Schrödinger operators. The work of Avila that I am most familiar with is his solution with Svetlana Jitomirskaya of the ten martini problem of Kac, so named because (according to Barry Simon) Kac offered ten martinis for its solution. The problem involves perhaps the simplest example of a Schrödinger operator with non-trivial spectral properties, namely the almost Mathieu operator: a discrete one-dimensional Schrödinger operator with a cosine potential, depending on a coupling constant, a frequency, and a phase.
This is a bounded self-adjoint operator and thus has a spectrum that is a compact subset of the real line; it arises in a number of physical contexts, most notably in the theory of the integer quantum Hall effect, though I will not discuss these applications here. Remarkably, the structure of this spectrum depends crucially on the Diophantine properties of the frequency. For instance, if the frequency is a rational number, then the operator is periodic, and basic (discrete) Floquet theory tells us that the spectrum is simply the union of finitely many (possibly touching) intervals. But for irrational frequencies (in which case the spectrum is independent of the phase), the situation is much more fractal in nature; for instance, at critical coupling the spectrum (as a function of the frequency) gives rise to the Hofstadter butterfly. The “ten martini problem” asserts that for every irrational frequency and every choice of coupling constant, the spectrum is homeomorphic to a Cantor set. Prior to the work of Avila and Jitomirskaya, there were a number of partial results on this problem, notably the result of Puig establishing Cantor spectrum for a full measure set of parameters, as well as results requiring a perturbative hypothesis, such as the coupling constant being very small or very large. The result was also already known for frequencies either very close to rational (i.e. Liouville numbers) or very far from rational (Diophantine numbers), although the analyses for these two cases failed to meet in the middle, leaving some cases untreated. The argument uses a wide variety of existing techniques, both perturbative and non-perturbative, to attack this problem, as well as an amusing argument by contradiction: they assume (in certain regimes) that the spectrum fails to be a Cantor set, and use this hypothesis to obtain additional Lipschitz control on the spectrum (as a function of the frequency), which they can then use (after much effort) to improve existing arguments and conclude that the spectrum was in fact Cantor after all!
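One can get a feel for this spectrum numerically. The following Python sketch (my own illustration; the true spectrum concerns the operator on the full lattice, which finite truncations only approximate) counts eigenvalues of a large tridiagonal truncation of the almost Mathieu operator below a given energy, via the classical Sturm-sequence inertia count; plateaux of this count as the energy varies signal gaps in the (Cantor-like) spectrum:

```python
import math

def count_eigs_below(diag, off, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with the given
    diagonal and off-diagonal entries that are < x (Sturm sequence / inertia)."""
    count = 0
    d = 1.0
    for k in range(len(diag)):
        b2 = off[k - 1] ** 2 if k > 0 else 0.0
        d = (diag[k] - x) - b2 / d
        if d == 0.0:
            d = 1e-300  # nudge past an exact singularity
        if d < 0:
            count += 1
    return count

# N x N truncation at critical coupling lam = 1, golden-ratio frequency alpha:
# diagonal entries 2*lam*cos(2*pi*alpha*n), off-diagonal entries 1.
N = 500
lam = 1.0
alpha = (math.sqrt(5) - 1) / 2
diag = [2.0 * lam * math.cos(2.0 * math.pi * alpha * n) for n in range(N)]
off = [1.0] * (N - 1)

# Sample the integrated eigenvalue count on an energy grid over [-4, 4];
# flat stretches of this count correspond to spectral gaps.
grid = [-4.0 + 8.0 * i / 400 for i in range(401)]
ids = [count_eigs_below(diag, off, x) for x in grid]
print("eigenvalues below 0:", count_eigs_below(diag, off, 0.0))
print("total eigenvalues:", ids[-1])
```

Plotting these eigenvalues against the frequency (rather than fixing it at the golden ratio) reproduces the Hofstadter butterfly mentioned above.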
Manjul Bhargava produces amazingly beautiful mathematics, though most of it is outside of my own area of expertise. One part of his work that touches on an area of my own interest (namely, random matrix theory) is his ongoing work with many co-authors on modeling (both conjecturally and rigorously) the statistics of various key number-theoretic features of elliptic curves (such as their rank, their Selmer group, or their Tate-Shafarevich groups). For instance, with Kane, Lenstra, Poonen, and Rains, Manjul has proposed a very general random matrix model that predicts all of these statistics (for instance, predicting that the p-component of the Tate-Shafarevich group is distributed like the cokernel of a certain random p-adic matrix, very much in the spirit of the Cohen-Lenstra heuristics discussed in this previous post). But what is even more impressive is that Manjul and his coauthors have been able to verify several non-trivial fragments of this model (e.g. showing that certain moments have the predicted asymptotics), giving for the first time non-trivial upper and lower bounds for various statistics, for instance obtaining lower bounds on how often an elliptic curve has rank 0 or rank 1, leading most recently (in combination with existing work of Gross-Zagier and of Kolyvagin, among others) to his amazing result with Skinner and Zhang that a majority of all elliptic curves over the rationals (ordered by height) obey the Birch and Swinnerton-Dyer conjecture. Previously it was not even known that a positive proportion of curves obeyed the conjecture.
This is still a fair ways from resolving the conjecture fully (in particular, the situation with the presumably small number of curves of rank 2 and higher is still very poorly understood, and the theory of Gross-Zagier and Kolyvagin that this work relies on, which was initially only available over the rationals, has only been extended to totally real number fields thus far, by the work of Zhang), but it certainly does provide hope that the conjecture could be within reach in a statistical sense at least.
Martin Hairer works at the interface between probability and partial differential equations, and in particular in the theory of stochastic differential equations (SDEs). The result of his that is closest to my own interests is his remarkable demonstration with Jonathan Mattingly of the uniqueness of the invariant measure for the two-dimensional stochastically forced Navier-Stokes equation
on the two-torus, where the forcing term is a Gaussian field that forces a fixed set of frequencies. It is expected that for any reasonable choice of initial data, the solution to this equation should asymptotically be distributed according to Kolmogorov’s power law, as discussed in this previous post. This is still far from established rigorously (although there are some results in this direction for dyadic models, see e.g. this paper of Cheskidov, Shvydkoy, and Friedlander). However, Hairer and Mattingly were able to show that there was a unique probability distribution to which the solutions arising from almost every initial data would converge asymptotically; by the ergodic theorem, this is equivalent to demonstrating the existence and uniqueness of an invariant measure for the flow. Existence can be established using standard methods, but uniqueness is much more difficult. One of the standard routes to uniqueness is to establish a “strong Feller property” that enforces some continuity on the transition operators; among other things, this would mean that two ergodic probability measures with intersecting supports would in fact have a non-trivial common component, contradicting the ergodic theorem (which forces different ergodic measures to be mutually singular). Since all ergodic measures for Navier-Stokes can be seen to contain the origin in their support, this would give uniqueness. Unfortunately, the strong Feller property is unlikely to hold in the infinite-dimensional phase space for Navier-Stokes; but Hairer and Mattingly developed a clean abstract substitute for this property, which they call the asymptotic strong Feller property, which is again a regularity property on the transition operator; this in turn is then demonstrated by a careful application of Malliavin calculus.
Maryam Mirzakhani has mostly focused on the geometry and dynamics of Teichmüller-type moduli spaces, such as the moduli space of Riemann surfaces with a fixed genus and a fixed number of cusps (or with a fixed number of boundaries that are geodesics of a prescribed length). These spaces have an incredibly rich structure, ranging from geometric structure (such as the Kähler geometry given by the Weil-Petersson metric), to dynamical structure (through the action of the mapping class group on this and related spaces), to algebraic structure (viewing these spaces as algebraic varieties), and are thus connected to many other objects of interest in geometry and dynamics. For instance, by developing a new recursive formula for the Weil-Petersson volume of this space, Mirzakhani was able to asymptotically count the number of simple prime geodesics of length up to some threshold in a hyperbolic surface (or more precisely, she obtained asymptotics for the number of such geodesics in a given orbit of the mapping class group); the answer turns out to be polynomial in the length threshold, in contrast to the much larger class of non-simple prime geodesics, whose asymptotics are exponential in the threshold (the “prime number theorem for geodesics”, developed in a classic series of works by Delsarte, Huber, Selberg, and Margulis); she also used this formula to establish a new proof of a conjecture of Witten on intersection numbers that was first proven by Kontsevich. More recently, in two lengthy papers with Eskin and with Eskin-Mohammadi, Mirzakhani established rigidity theorems for the action of SL(2,R) on such moduli spaces that are close analogues of Ratner’s celebrated rigidity theorems for unipotently generated groups (discussed in this previous blog post).
Ratner’s theorems are already notoriously difficult to prove, and rely very much on the polynomial stability properties of unipotent flows; in this even more complicated setting, the unipotent flows are no longer tractable, and Mirzakhani instead uses a recent “exponential drift” method of Benoist and Quint as a substitute. Ratner’s theorems are incredibly useful for all sorts of problems connected to homogeneous dynamics, and the analogous theorems established by Mirzakhani, Eskin, and Mohammadi have a similarly broad range of applications, for instance in counting periodic billiard trajectories in rational polygons.
In the traditional foundations of probability theory, one selects a probability space, and makes a distinction between deterministic mathematical objects, which do not depend on the sampled state, and stochastic (or random) mathematical objects, which do depend (but in a measurable fashion) on the sampled state. For instance, a deterministic real number would just be an element of the reals, whereas a stochastic real number (or real random variable) would be a measurable function from the sample space to the reals, where in this post the reals will always be endowed with the Borel σ-algebra. (For readers familiar with nonstandard analysis, the adjectives “deterministic” and “stochastic” will be used here in a manner analogous to the uses of the adjectives “standard” and “nonstandard” in nonstandard analysis. The analogy is particularly close when comparing with the “cheap nonstandard analysis” discussed in this previous blog post. We will also use “relative to the sample space” as a synonym for “stochastic”.)
Actually, for our purposes we will adopt the philosophy of identifying stochastic objects that agree almost surely, so if one were to be completely precise, we should define a stochastic real number to be an equivalence class of measurable functions from the sample space to the reals, up to almost sure equivalence. However, we shall often abuse notation and identify a measurable function with its equivalence class.
More generally, given any measurable space, we can talk either about deterministic elements of that space, or about stochastic elements: that is to say, equivalence classes of measurable maps from the sample space into the given space, up to almost sure equivalence. We will think of the set of all stochastic elements of a measurable space as the stochastic counterpart of that space. (For readers familiar with sheaves, it may be helpful for the purposes of this post to think of this set as the space of measurable global sections of the trivial bundle over the sample space with the given space as fibre.) Of course every deterministic element can also be viewed as a stochastic element, given by (the equivalence class of) the corresponding constant function, thus giving an embedding of the deterministic elements into the stochastic ones. We do not attempt here to give an interpretation of stochastic elements for sets that are not equipped with a σ-algebra.
Remark 1 In my previous post on the foundations of probability theory, I emphasised the freedom to extend the sample space to a larger sample space whenever one wished to inject additional sources of randomness. This is of course an important freedom to possess (and in the current formalism, is the analogue of the important operation of base change in algebraic geometry), but in this post we will focus on a single fixed sample space , and not consider extensions of this space, so that one only has to consider two types of mathematical objects (deterministic and stochastic), as opposed to having many more such types, one for each potential choice of sample space (with the deterministic objects corresponding to the case when the sample space collapses to a point).
Any (measurable) k-ary operation on deterministic mathematical objects then extends to their stochastic counterparts by applying the operation pointwise. For instance, the addition operation on deterministic real numbers extends to an addition operation on stochastic real numbers, by defining the sum of two equivalence classes to be the equivalence class of the pointwise sum of representatives; this operation is easily seen to be well-defined. More generally, any measurable k-ary deterministic operation between measurable spaces extends to a stochastic operation in the obvious manner.
There is a similar story for k-ary relations, although here one has to make a distinction between a deterministic reading of the relation and a stochastic one. Namely, given stochastic objects as arguments, the relation does not necessarily take values in the deterministic Boolean algebra of the two truth values, but only in the stochastic Boolean algebra – thus the relation may be true with some positive probability and also false with some positive probability (with the event that the relation holds being determined up to null events). Of course, the deterministic Boolean algebra embeds in the stochastic one, so we can talk about a relation being deterministically true or deterministically false, which (due to our identification of stochastic objects that agree almost surely) means that the relation is almost surely true or almost surely false respectively. For instance, given two stochastic objects, one can view their equality relation as having a stochastic truth value. This is distinct from the way the equality symbol is used in mathematical logic, which we will now call “equality in the deterministic sense” to reduce confusion. Thus, two stochastic objects are equal in the deterministic sense if and only if the stochastic truth value of their equality is almost surely true, that is to say the two objects agree for almost all states.
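To make the pointwise extension of operations and relations concrete, here is a toy Python model (hypothetical illustration code of mine, not part of the post) of stochastic reals over a finite sample space, in which operations act pointwise and relations return stochastic truth values, i.e. events:

```python
from fractions import Fraction

OMEGA = [0, 1, 2, 3]                       # a finite sample space
P = {w: Fraction(1, 4) for w in OMEGA}     # uniform probability measure

class StochasticReal:
    """A real random variable on OMEGA; operations act pointwise in the state."""
    def __init__(self, values):
        self.values = dict(values)
    def __add__(self, other):
        return StochasticReal({w: self.values[w] + other.values[w] for w in OMEGA})
    def eq(self, other):
        """Stochastic truth value of equality: the event where the two agree."""
        return {w for w in OMEGA if self.values[w] == other.values[w]}

def prob(event):
    return sum(P[w] for w in event)

x = StochasticReal({0: 1, 1: 0, 2: 2, 3: 0})
y = StochasticReal({0: 1, 1: 3, 2: 2, 3: 0})
print(prob(x.eq(y)))            # equality holds with probability 3/4, so not deterministically
print(prob((x + y).eq(y + x)))  # commutativity is a universal identity: probability 1
```

Here x and y are equal with positive probability and unequal with positive probability, so their equality is stochastically but not deterministically true; by contrast, a universal identity such as commutativity of addition holds at every state.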
Any universal identity for deterministic operations (or universal implication between identities) extends to their stochastic counterparts: for instance, addition is commutative, associative, and cancellative on the space of deterministic reals, and is therefore commutative, associative, and cancellative on stochastic reals as well. However, one has to be more careful when working with mathematical laws that are not expressible as universal identities, or implications between identities. For instance, the deterministic reals form an integral domain: if the product of two deterministic reals vanishes, then at least one of the factors must vanish. However, if the product of two stochastic reals vanishes in the deterministic sense, it is no longer necessarily the case that either factor vanishes in the deterministic sense; what is still true is that the disjunction “the first factor vanishes or the second factor vanishes” holds in the deterministic sense, provided the boolean operator “or” is interpreted stochastically, so that for almost every state at least one of the two factors vanishes. Another way to properly obtain a stochastic interpretation of the integral domain property is to express it in terms of an index recording which factor vanishes, and then to allow that index to be stochastic, rather than deterministic. (A technical note: when one proves this statement, one has to select the index in a measurable fashion; for instance, one can select the first factor whenever it vanishes, and the second factor otherwise, so that in the “tie-breaking” case when both factors vanish, one always selects the first.)
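The measurable tie-breaking selection just described can be sketched in a toy finite-sample-space setting (hypothetical illustration code of mine): given stochastic reals whose product vanishes at every state, one selects, at each state, the first factor if it vanishes there and the second factor otherwise:

```python
OMEGA = [0, 1, 2, 3]           # a finite sample space
x = {0: 0, 1: 5, 2: 0, 3: 0}   # stochastic reals with x(w) * y(w) = 0 for every w
y = {0: 7, 1: 0, 2: 0, 3: 4}

# Stochastic index i(w): select the first factor when it vanishes, else the second.
# In particular, when both factors vanish, the tie is broken in favour of the first.
i = {w: 1 if x[w] == 0 else 2 for w in OMEGA}

# The selected factor vanishes at every state, i.e. deterministically.
selected = {w: x[w] if i[w] == 1 else y[w] for w in OMEGA}
print(i)
print(selected)
```

Note that neither x nor y vanishes deterministically on its own; it is only the stochastically selected factor that vanishes at every state.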
Similarly, the law of the excluded middle fails when interpreted deterministically, but remains true when interpreted stochastically: given a stochastic statement, it is not necessarily the case that the statement is either deterministically true or deterministically false; however, the disjunction of the statement with its negation is still deterministically true if the boolean operator “or” is interpreted stochastically rather than deterministically.
To avoid having to keep pointing out which operations are interpreted stochastically and which ones deterministically, we will use the following convention: if we assert that a mathematical sentence involving stochastic objects is true, then (unless otherwise specified) we mean that the sentence is deterministically true, assuming that all relations used inside it are interpreted stochastically. For instance, if x and y are stochastic reals, when we assert that “exactly one of x < y, x = y, or x > y is true”, then by default it is understood that the relations <, =, > and the boolean operator “exactly one of” are interpreted stochastically, and the assertion is that the sentence is deterministically true.
In the above discussion, the stochastic objects being considered were elements of a deterministic space, such as the reals. However, it can often be convenient to generalise this situation by allowing the ambient space to also be stochastic. For instance, one might wish to consider a stochastic vector inside a stochastic vector space, or a stochastic edge of a stochastic graph. In order to formally describe this situation within the classical framework of measure theory, one needs to place all the ambient spaces inside a measurable space. This can certainly be done in many contexts (e.g. when considering random graphs on a deterministic set of vertices, or if one is willing to work up to equivalence and place the ambient spaces inside a suitable moduli space), but is not completely natural in other contexts. For instance, if one wishes to consider stochastic vector spaces of potentially unbounded dimension (in particular, potentially larger than any given cardinal that one might specify in advance), then the class of all possible vector spaces is so large that it becomes a proper class rather than a set (even if one works up to equivalence), making it problematic to give this class the structure of a measurable space; furthermore, even once one does so, one needs to take additional care to pin down what it would mean for a random vector lying in a random vector space to depend “measurably” on the sampled state.
Of course, in any reasonable application one can avoid the set-theoretic issues, at least, by various ad hoc means, for instance by bounding the dimension of all spaces involved by some fixed cardinal. However, the measure-theoretic issues can require some additional effort to resolve properly.
In this post I would like to describe a different way to formalise stochastic spaces, and stochastic elements of these spaces, by viewing the spaces as a measure-theoretic analogue of a sheaf, but over the probability space rather than over a topological space; stochastic objects are then sections of such sheaves. Actually, for minor technical reasons it is convenient to work in the slightly more general setting in which the base space is a finite measure space rather than a probability space, thus can take any value in rather than being normalised to equal . This will allow us to easily localise to subevents of without the need for normalisation, even when is a null event (though we caution that the map from deterministic objects ceases to be injective in this latter case). We will however still continue to use probabilistic terminology despite the lack of normalisation; thus for instance, sets in will be referred to as events, the measure of such a set will be referred to as the probability (which is now permitted to exceed in some cases), and an event whose complement is a null event shall be said to hold almost surely. It is in fact likely that almost all of the theory below extends to base spaces which are -finite rather than finite (for instance, by damping the measure to become finite, without introducing any further null events), although we will not pursue this further generalisation here.
The approach taken in this post is “topos-theoretic” in nature (although we will not use the language of topoi explicitly here), and is well suited to a “pointless” or “point-free” approach to probability theory, in which the role of the stochastic state is suppressed as much as possible; instead, one strives to always adopt a “relative point of view”, with all objects under consideration being viewed as stochastic objects relative to the underlying base space . In this perspective, the stochastic version of a set is as follows.
Definition 1 (Stochastic set) Unless otherwise specified, we assume that we are given a fixed finite measure space (which we refer to as the base space). A stochastic set (relative to ) is a tuple consisting of the following objects:
- A set assigned to each event ; and
- A restriction map from to for each pair of nested events . (Strictly speaking, one should indicate the dependence on in the notation for the restriction map, e.g. using instead of , but we will abuse notation by omitting the dependence.)
We refer to elements of as local stochastic elements of the stochastic set , localised to the event , and elements of as global stochastic elements (or simply elements) of the stochastic set. (In the language of sheaves, one would use “sections” instead of “elements” here, but I prefer to use the latter terminology here, for compatibility with conventional probabilistic notation, where for instance measurable maps from to are referred to as real random variables, rather than sections of the reals.)
Furthermore, we impose the following axioms:
- (Category) The map from to is the identity map, and if are events in , then for all .
- (Null events trivial) If is a null event, then the set is a singleton set. (In particular, is always a singleton set; this is analogous to the convention that for any number .)
- (Countable gluing) Suppose that for each natural number , one has an event and an element such that for all . Then there exists a unique such that for all .
If is an event in , we define the localisation of the stochastic set to to be the stochastic set
relative to . (Note that there is no need to renormalise the measure on , as we are not demanding that our base space have total measure .)
The following fact is useful for actually verifying that a given object indeed has the structure of a stochastic set:
Exercise 1 Show that to verify the countable gluing axiom of a stochastic set, it suffices to do so under the additional hypothesis that the events are disjoint. (Note that this is quite different from the situation with sheaves over a topological space, in which the analogous gluing axiom is often trivial in the disjoint case but has non-trivial content in the overlapping case. This is ultimately because a -algebra is closed under all Boolean operations, whereas a topology is only closed under union and intersection.)
Let us illustrate the concept of a stochastic set with some examples.
Example 1 (Discrete case) A simple case arises when is a discrete space which is at most countable. We assign a set to each , with a singleton if . One then sets , with the obvious restriction maps, giving rise to a stochastic set . (Thus, a local element of can be viewed as a map on that takes values in for each .) Conversely, it is not difficult to see that any stochastic set over an at most countable discrete probability space is of this form up to isomorphism. In this case, one can think of as a bundle of sets over each point (of positive probability) in the base space . One can extend this bundle interpretation of stochastic sets to reasonably nice sample spaces (such as standard Borel spaces) and similarly reasonable ; however, I would like to avoid this interpretation in the formalism below in order to be able to easily work in settings in which and are very “large” (e.g. not separable in any reasonable sense). Note that we permit some of the to be empty, thus it can be possible for to be empty whilst for some strict subevents of to be non-empty. (This is analogous to how it is possible for a sheaf to have local sections but no global sections.) As such, the space of global elements does not completely determine the stochastic set ; one sometimes needs to localise to an event in order to see the full structure of such a set. Thus it is important to distinguish between a stochastic set and its space of global elements. (As such, it is a slight abuse of the axiom of extensionality to refer to global elements of simply as “elements”, but hopefully this should not cause too much confusion.)
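In the discrete, at most countable model just described, a stochastic set can be coded directly as a bundle of fibres. The following Python sketch (all names here are hypothetical illustrations, not notation from the text) represents a local element on an event as a map from sample points to fibre values, with restriction given by restricting the domain and gluing given by combining compatible pieces:

```python
class DiscreteStochasticSet:
    """A stochastic set over a finite sample space: one fibre per point."""

    def __init__(self, fibres):
        # fibres: dict mapping each sample point to the set of allowed values
        self.fibres = fibres

    def is_local_element(self, x, event):
        # A local element on an event chooses a fibre value at each point.
        return set(x) == set(event) and all(x[w] in self.fibres[w] for w in event)

    def restrict(self, x, subevent):
        # Restriction map: forget the values outside the subevent.
        return {w: x[w] for w in subevent}

    def glue(self, pieces):
        # Gluing axiom: combine compatible local elements, given as
        # (event, local element) pairs, into one local element on the union.
        glued = {}
        for event, x in pieces:
            for w in event:
                assert w not in glued or glued[w] == x[w], "incompatible overlap"
                glued[w] = x[w]
        return glued
```

Note that the uniqueness half of the gluing axiom is automatic here: a local element is determined by its values at the individual points.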
Example 2 (Measurable spaces as stochastic sets) Returning now to a general base space , any (deterministic) measurable space gives rise to a stochastic set , with being defined as in previous discussion as the measurable functions from to modulo almost everywhere equivalence (in particular, a singleton set when is null), with the usual restriction maps. The constraint of measurability on the maps , together with the quotienting by almost sure equivalence, means that is now more complicated than a plain Cartesian product of fibres, but this still serves as a useful first approximation to what is for the purposes of developing intuition. Indeed, the measurability constraint is so weak (as compared for instance to topological or smooth constraints in other contexts, such as sheaves of continuous or smooth sections of bundles) that the intuition of essentially independent fibres is quite an accurate one, at least if one avoids consideration of an uncountable number of objects simultaneously.
Example 3 (Extended Hilbert modules) This example is the one that motivated this post for me. Suppose that one has an extension of the base space , thus we have a measurable factor map such that the pushforward of the measure by is equal to . Then we have a conditional expectation operator , defined as the adjoint of the pullback map . As is well known, the conditional expectation operator also extends to a contraction ; by monotone convergence we may also extend to a map from measurable functions from to the extended non-negative reals , to measurable functions from to . We then define the “extended Hilbert module” to be the space of functions with finite almost everywhere. This is an extended version of the Hilbert module , which is defined similarly except that is required to lie in ; this is a Hilbert module over which is of particular importance in the Furstenberg-Zimmer structure theory of measure-preserving systems. We can then define the stochastic set by setting
with the obvious restriction maps. In the case that are standard Borel spaces, one can disintegrate as an integral of probability measures (supported in the fibre ), in which case this stochastic set can be viewed as having fibres (though if is not discrete, there are still some measurability conditions in on the local and global elements that need to be imposed). However, I am interested in the case when are not standard Borel spaces (in fact, I will take them to be algebraic probability spaces, as defined in this previous post), in which case disintegrations are not available. However, it appears that the stochastic analysis developed in this blog post can serve as a substitute for the tool of disintegration in this context.
We make the remark that if is a stochastic set and are events that are equivalent up to null events, then one can identify with (through their common restriction to , with the restriction maps now being bijections). As such, the notion of a stochastic set does not require the full structure of a concrete probability space ; one could also have defined the notion using only the abstract -algebra consisting of modulo null events as the base space, or equivalently one could define stochastic sets over the algebraic probability spaces defined in this previous post. However, we will stick with the classical formalism of concrete probability spaces here so as to keep the notation reasonably familiar.
As a corollary of the above observation, we see that if the base space has total measure , then all stochastic sets are trivial (they are just points).
Exercise 2 If is a stochastic set, show that there exists an event with the property that for any event , is non-empty if and only if is contained in modulo null events. (In particular, is unique up to null events.) Hint: consider the numbers for ranging over all events with non-empty, and form a maximising sequence for these numbers. Then use all three axioms of a stochastic set.
One can now start to take many of the fundamental objects, operations, and results in set theory (and, hence, in most other categories of mathematics) and establish analogues relative to a finite measure space. Implicitly, what we will be doing in the next few paragraphs is endowing the category of stochastic sets with the structure of an elementary topos. However, to keep things reasonably concrete, we will not explicitly emphasise the topos-theoretic formalism here, although it is certainly lurking in the background.
Firstly, we define a stochastic function between two stochastic sets to be a collection of maps for each which form a natural transformation in the sense that for all and nested events . In the case when is discrete and at most countable (and after deleting all null points), a stochastic function is nothing more than a collection of functions for each , with the function then being a direct sum of the factor functions :
Thus (in the discrete, at most countable setting, at least) stochastic functions do not mix together information from different states in a sample space; the value of at depends only on the value of at . The situation is a bit more subtle for continuous probability spaces, due to the identification of stochastic objects that agree almost surely; nevertheless, it is still good intuition to think of stochastic functions as essentially being “pointwise” or “local” in nature.
One can now form the stochastic set of functions from to , by setting for any event to be the set of local stochastic functions of the localisations of to ; this is a stochastic set if we use the obvious restriction maps. In the case when is discrete and at most countable, the fibre at a point of positive measure is simply the set of functions from to .
In a similar spirit, we say that one stochastic set is a (stochastic) subset of another , and write , if we have a stochastic inclusion map, thus for all events , with the restriction maps being compatible. We can then define the power set of a stochastic set by setting for any event to be the set of all stochastic subsets of relative to ; it is easy to see that is a stochastic set with the obvious restriction maps (one can also identify with in the obvious fashion). Again, when is discrete and at most countable, the fibre of at a point of positive measure is simply the deterministic power set .
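In the discrete model one can make the fibres of these derived stochastic sets completely explicit. The sketch below (hypothetical helper name) enumerates the fibre of the stochastic function set at a point of positive measure, which as noted above is just the set of all functions from one fibre to the other:

```python
from itertools import product

def function_fibre(x_fibre, y_fibre):
    # The fibre of the stochastic function set Hom(X, Y) at a point of
    # positive measure: all functions from X's fibre to Y's fibre,
    # each encoded as a dict.
    xs = sorted(x_fibre)
    return [dict(zip(xs, vals))
            for vals in product(sorted(y_fibre), repeat=len(xs))]
```

For instance, if both fibres have two elements, the fibre of the function set has four elements; similarly, the fibre of the power set at such a point is the ordinary power set of the fibre.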
Note that if is a stochastic function and is a stochastic subset of , then the inverse image , defined by setting for any event to be the set of those with , is a stochastic subset of . In particular, given a -ary relation , the inverse image is a stochastic subset of , which by abuse of notation we denote as
In a similar spirit, if is a stochastic subset of and is a stochastic function, we can define the image by setting to be the set of those with ; one easily verifies that this is a stochastic subset of .
Remark 2 One should caution that in the definition of the subset relation , it is important that for all events , not just the global event ; in particular, just because a stochastic set has no global sections, does not mean that it is contained in the stochastic empty set .
Now we discuss Boolean operations on stochastic subsets of a given stochastic set . Given two stochastic subsets of , the stochastic intersection is defined by setting to be the set of that lie in both and :
This is easily verified to again be a stochastic subset of . More generally one may define stochastic countable intersections for any sequence of stochastic subsets of . One could extend this definition to uncountable families if one wished, but I would advise against it, because some of the usual laws of Boolean algebra (e.g. the de Morgan laws) may break down in this setting.
Stochastic unions are a bit more subtle. The set should not be defined to simply be the union of and , as this would not respect the gluing axiom. Instead, we define to be the set of all such that one can cover by measurable subevents such that for ; then may be verified to be a stochastic subset of . Thus for instance is the stochastic union of and . Similarly for countable unions of stochastic subsets of , although uncountable unions are extremely problematic (they are disliked by both the measure theory and the countable gluing axiom) and will not be defined here. Finally, the stochastic difference set is defined as the set of all in such that for any subevent of of positive probability. One may verify that in the case when is discrete and at most countable, these Boolean operations correspond to the classical Boolean operations applied separately to each fibre of the relevant sets . We also leave as an exercise to the reader to verify the usual laws of Boolean arithmetic, e.g. the de Morgan laws, provided that one works with at most countable unions and intersections.
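In the discrete model, all of these Boolean operations act fibrewise, as just noted. A small sketch (hypothetical function names), encoding a stochastic subset by its fibres:

```python
def stochastic_union(a_fibres, b_fibres):
    # A local element x lies in the union on an event iff the event can be
    # covered by the subevents {x in A} and {x in B}; fibrewise this is
    # simply the union of the fibres.
    return {w: a_fibres[w] | b_fibres[w] for w in a_fibres}

def stochastic_intersection(a_fibres, b_fibres):
    # Intersection needs no covering argument: fibrewise intersection.
    return {w: a_fibres[w] & b_fibres[w] for w in a_fibres}

def stochastic_difference(a_fibres, b_fibres):
    # x avoids B on every positive-probability subevent iff it avoids
    # B's fibre at every point of positive measure.
    return {w: a_fibres[w] - b_fibres[w] for w in a_fibres}
```

The subtlety of the covering definition of the union only becomes visible once one identifies elements that agree almost surely, which is invisible in this purely discrete picture.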
One can also consider a stochastic finite union in which the number of sets in the union is itself stochastic. More precisely, let be a stochastic set, let be a stochastic natural number, and let be a stochastic function from the stochastic set (defined by setting ) to the stochastic power set . Here we are considering to be a natural number, to allow for unions that are possibly empty, with used for the positive natural numbers. We also write for the stochastic function . Then we can define the stochastic union by setting for an event to be the set of local elements with the property that there exists a covering of by measurable subevents for , such that one has and . One can verify that is a stochastic set (with the obvious restriction maps). Again, in the model case when is discrete and at most countable, the fibre is what one would expect it to be, namely .
The Cartesian product of two stochastic sets may be defined by setting for all events , with the obvious restriction maps; this is easily seen to be another stochastic set. This lets one define the concept of a -ary operation from stochastic sets to another stochastic set , or a -ary relation . In particular, given for , the relation may be deterministically true, deterministically false, or have some other stochastic truth value.
Remark 3 In the degenerate case when is null, stochastic logic becomes a bit weird: all stochastic statements are deterministically true, as are their stochastic negations, since every event in (even the empty set) now holds with full probability. Among other pathologies, the empty set now has a global element over (this is analogous to the notorious convention ), and any two deterministic objects become equal over : .
The following simple observation is crucial to subsequent discussion. If is a sequence taking values in the global elements of a stochastic space , then we may also define global elements for stochastic indices as well, by appealing to the countable gluing axiom to glue together restricted to the set for each deterministic natural number to form . With this definition, the map is a stochastic function from to ; indeed, this creates a one-to-one correspondence between external sequences (maps from to ) and stochastic sequences (stochastic functions from to ). Similarly with replaced by any other at most countable set. This observation will be important in allowing many deterministic arguments involving sequences to be carried over to the stochastic setting.
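In the discrete model, this gluing construction for a stochastic index amounts to evaluating the sequence pointwise in the sample space. A minimal sketch (hypothetical names):

```python
def evaluate_at_stochastic_index(sequence, n):
    # sequence: a list of global elements (dicts from sample points to
    #           values), i.e. an external sequence indexed by
    #           deterministic natural numbers.
    # n: a stochastic natural number (dict from sample points to indices).
    # Gluing the restrictions of the k-th element over the events {n = k}
    # produces the global element "x_n":
    return {w: sequence[n[w]][w] for w in n}
```

At each sample point the stochastic index selects which member of the sequence to read off, which is exactly what the countable gluing axiom delivers in general.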
We now specialise from the extremely broad discipline of set theory to the more focused discipline of real analysis. There are two fundamental axioms that underlie real analysis (and in particular distinguishes it from real algebra). The first is the Archimedean property, which we phrase in the “no infinitesimal” formulation as follows:
Proposition 2 (Archimedean property) Let be such that for all positive natural numbers . Then .
The other is the least upper bound axiom:
Proposition 3 (Least upper bound axiom) Let be a non-empty subset of which has an upper bound , thus for all . Then there exists a unique real number with the following properties:
- for all .
- For any real , there exists such that .
- .
Furthermore, does not depend on the choice of .
The Archimedean property extends easily to the stochastic setting:
Proposition 4 (Stochastic Archimedean property) Let be such that for all deterministic natural numbers . Then .
Remark 4 Here, incidentally, is one place in which this stochastic formalism deviates from the nonstandard analysis formalism, as the latter certainly permits the existence of infinitesimal elements. On the other hand, we caution that stochastic real numbers are permitted to be unbounded, so that any formulation of the Archimedean property that bounds each real by a deterministic natural number is not valid in the stochastic setting.
The proof is easy and is left to the reader. The least upper bound axiom also extends nicely to the stochastic setting, but the proof requires more work (in particular, our argument uses the monotone convergence theorem):
Theorem 5 (Stochastic least upper bound axiom) Let be a stochastic subset of which has a global upper bound , thus for all , and is globally non-empty in the sense that there is at least one global element . Then there exists a unique stochastic real number with the following properties:
- for all .
- For any stochastic real , there exists such that .
- .
Furthermore, does not depend on the choice of .
For future reference, we note that the same result holds with replaced by throughout, since the latter may be embedded in the former, for instance by mapping to and to . In applications, the above theorem serves as a reasonable substitute for the countable axiom of choice, which does not appear to hold in unrestricted generality relative to a measure space; in particular, it can be used to generate various extremising sequences for stochastic functionals on various stochastic function spaces.
Proof: Uniqueness is clear (using the Archimedean property), as well as the independence on , so we turn to existence. By using an order-preserving map from to (e.g. ) we may assume that is a subset of , and that .
We observe that is a lattice: if , then and also lie in . Indeed, may be formed by appealing to the countable gluing axiom to glue (restricted to the set ) with (restricted to the set ), and similarly for . (Here we use the fact that relations such as are Borel measurable on .)
Let denote the deterministic quantity
then (by Proposition 3!) is well-defined; here we use the hypothesis that is finite. Thus we may find a sequence of elements of such that
Using the lattice property, we may assume that the are non-decreasing: whenever . If we then define (after choosing measurable representatives of each equivalence class ), then is a stochastic real with .
If , then , and so
From this and (1) we conclude that
From monotone convergence, we conclude that
and so , as required.
Now let be a stochastic real. After choosing measurable representatives of each relevant equivalence class, we see that for almost every , we can find a natural number with . If we choose to be the first such positive natural number when it exists, and (say) otherwise, then is a stochastic positive natural number and . The claim follows.
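In the discrete model, the stochastic least upper bound of Theorem 5 is simply the pointwise supremum over the family of sections. A sketch for a finite family over a finite sample space (hypothetical name):

```python
def stochastic_sup(elements):
    # elements: a non-empty list of stochastic reals (dicts from sample
    # points to reals). The stochastic least upper bound is taken fibre
    # by fibre; note that at different points the supremum may be
    # attained by different members of the family, which is why the
    # lattice property (taking pointwise maxima of pairs) is needed
    # in the general proof.
    points = elements[0].keys()
    return {w: max(x[w] for x in elements) for w in points}
```

One can check the two defining properties directly: the result dominates every member of the family pointwise, and it is the smallest stochastic real that does so.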
Remark 5 One can abstract away the role of the measure here, leaving only the ideal of null sets. The property that the measure is finite is then replaced by the more general property that given any non-empty family of measurable sets, there is an at most countable union of sets in that family that is an upper bound modulo null sets for all elements in that family.
Using Proposition 4 and Theorem 5, one can then revisit many of the other foundational results of deterministic real analysis, and develop stochastic analogues; we give some examples of this below the fold (focusing on the Heine-Borel theorem and a case of the spectral theorem). As an application of this formalism, we revisit some of the Furstenberg-Zimmer structural theory of measure-preserving systems, particularly that of relatively compact and relatively weakly mixing systems, and interpret them in this framework, basically as stochastic versions of compact and weakly mixing systems (though with the caveat that the shift map is allowed to act non-trivially on the underlying probability space). As this formalism is “point-free”, in that it avoids explicit use of fibres and disintegrations, it will be well suited for generalising this structure theory to settings in which the underlying probability spaces are not standard Borel, and the underlying groups are uncountable; I hope to discuss such generalisations in future blog posts.
Remark 6 Roughly speaking, stochastic real analysis can be viewed as a restricted subset of classical real analysis in which all operations have to be “measurable” with respect to the base space. In particular, indiscriminate application of the axiom of choice is not permitted, and one should largely restrict oneself to performing countable unions and intersections rather than arbitrary unions or intersections. Presumably one can formalise this intuition with a suitable “countable transfer principle”, but I was not able to formulate a clean and general principle of this sort, instead verifying various assertions about stochastic objects by hand rather than by direct transfer from the deterministic setting. However, it would be desirable to have such a principle, since otherwise one is faced with the tedious task of redoing all the foundations of real analysis (or whatever other base theory of mathematics one is going to be working in) in the stochastic setting by carefully repeating all the arguments.
More generally, topos theory is a good formalism for capturing precisely the informal idea of performing mathematics with certain operations, such as the axiom of choice, the law of the excluded middle, or arbitrary unions and intersections, being somehow “prohibited” or otherwise “restricted”.
As laid out in the foundational work of Kolmogorov, a classical probability space (or probability space for short) is a triplet , where is a set, is a -algebra of subsets of , and is a countably additive probability measure on . Given such a space, one can form a number of interesting function spaces, including
- the (real) Hilbert space of square-integrable functions , modulo -almost everywhere equivalence, and with the positive definite inner product ; and
- the unital commutative Banach algebra of essentially bounded functions , modulo -almost everywhere equivalence, with defined as the essential supremum of .
There is also a trace on defined by integration: .
One can form the category of classical probability spaces, by defining a morphism between probability spaces to be a function which is measurable (thus for all ) and measure-preserving (thus for all ).
Let us now abstract the algebraic features of these spaces as follows; for want of a better name, I will refer to this abstraction as an algebraic probability space, which is very similar to the non-commutative probability spaces studied in this previous post, except that these spaces are now commutative (and real).
Definition 1 An algebraic probability space is a pair where
- is a unital commutative real algebra;
- is a homomorphism such that and for all ;
- Every element of is bounded in the sense that . (Technically, this isn’t an algebraic property, but I need it for technical reasons.)
A morphism is a homomorphism which is trace-preserving, in the sense that for all .
For want of a better name, I’ll denote the category of algebraic probability spaces as . One can view this category as the opposite category to that of (a subcategory of) the category of tracial commutative real algebras. One could emphasise this opposite nature by denoting the algebraic probability space as rather than ; another suggestive (but slightly inaccurate) notation, inspired by the language of schemes, would be rather than . However, we will not adopt these conventions here, and refer to algebraic probability spaces just by the pair .
By the previous discussion, we have a covariant functor that takes a classical probability space to its algebraic counterpart , with a morphism of classical probability spaces mapping to a morphism of the corresponding algebraic probability spaces by the formula
for . One easily verifies that this is a functor.
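On finite spaces one can verify this functoriality concretely. The sketch below (hypothetical names) pulls back a random variable along a factor map and checks that the trace is preserved whenever the pushforward of the measure matches, which is exactly the trace-preservation requirement on the algebraic morphism:

```python
def pullback(pi, f):
    # The algebraic morphism associated to a measure-preserving factor map
    # pi: X -> Y sends a random variable f on Y to the composition f o pi
    # on X. Here pi and f are encoded as dicts on finite spaces.
    return {x: f[pi[x]] for x in pi}

def trace(f, mu):
    # Integration against the (finite) measure mu.
    return sum(f[w] * mu[w] for w in f)
```

Functoriality (pulling back along a composition of factor maps equals composing the pullbacks) is similarly a one-line check in this finite encoding.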
In this post I would like to describe a functor which partially inverts (up to natural isomorphism), that is to say a recipe for starting with an algebraic probability space and producing a classical probability space . This recipe is not new – it is basically the (commutative) Gelfand-Naimark-Segal construction (discussed in this previous post) combined with the Loomis-Sikorski theorem (discussed in this previous post). However, I wanted to put the construction in a single location for sake of reference. I also wanted to make the point that and are not complete inverses; there is a bit of information in the algebraic probability space (e.g. topological information) which is lost when passing back to the classical probability space. In some future posts, I would like to develop some ergodic theory using the algebraic foundations of probability theory rather than the classical foundations; this turns out to be convenient in the ergodic theory arising from nonstandard analysis (such as that described in this previous post), in which the groups involved are uncountable and the underlying spaces are not standard Borel spaces.
Let us describe how to construct the functor , with details postponed to below the fold.
- Starting with an algebraic probability space , form an inner product on by the formula , and also form the spectral radius .
- The inner product is clearly positive semi-definite. Quotienting out the null vectors and taking completions, we arrive at a real Hilbert space , to which the trace may be extended.
- Somewhat less obviously, the spectral radius is well-defined and gives a norm on . Taking limits of sequences in of bounded spectral radius gives us a subspace of that has the structure of a real commutative Banach algebra.
- The idempotents of the Banach algebra may be indexed by elements of an abstract -algebra .
- The Boolean algebra homomorphisms (or equivalently, the real algebra homomorphisms ) may be indexed by elements of a space .
- Let denote the -algebra on generated by the basic sets for every .
- Let be the -ideal of generated by the sets , where is a sequence with .
- One verifies that is isomorphic to . Using this isomorphism, the trace on can be used to construct a countably additive measure on . The classical probability space is then , and the abstract spaces may now be identified with their concrete counterparts , .
- Every algebraic probability space morphism generates a classical probability morphism via the formula
using a pullback operation on the abstract -algebras that can be defined by density.
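The spectral radius step in this recipe can be tested on the simplest case: the algebra of real functions on a finite set, with trace given by integration against a fully supported probability measure. There the limit of normalised traces of even powers recovers the sup norm. A sketch (hypothetical names, with a finite truncation of the limit):

```python
def tau(f, mu):
    # The trace: integration of f against the measure mu.
    return sum(f[w] * mu[w] for w in f)

def spectral_radius(f, mu, k=50):
    # rho(f) is the limit of tau(f^{2k})^{1/(2k)} as k -> infinity; for a
    # fully supported measure on a finite set this converges to max |f|,
    # since the largest value of |f| dominates the high even powers.
    powered = {w: f[w] ** (2 * k) for w in f}
    return tau(powered, mu) ** (1.0 / (2 * k))
```

The convergence is only up to a factor of the form (measure of the maximising set) raised to a power tending to one, which is why a moderately large truncation parameter is used in the check below.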
Remark 1 The classical probability space constructed by the functor has some additional structure; namely is a -Stone space (a Stone space with the property that the closure of any countable union of clopen sets is clopen), is the Baire -algebra (generated by the clopen sets), and the null sets are the meager sets. However, we will not use this additional structure here.
The partial inversion relationship between the functors and is given by the following assertion:
- There is a natural transformation from to the identity functor .
More informally: if one starts with an algebraic probability space and converts it back into a classical probability space , then there is a trace-preserving algebra homomorphism of to , which respects morphisms of the algebraic probability space. While this relationship is far weaker than an equivalence of categories (which would require that and are both natural isomorphisms), it is still good enough to allow many ergodic theory problems formulated using classical probability spaces to be reformulated instead as an equivalent problem in algebraic probability spaces.
Remark 2 The opposite composition is a little odd: it takes an arbitrary probability space and returns a more complicated probability space , with being the space of homomorphisms . While there is “morally” an embedding of into using the evaluation map, this map does not exist in general because points in may well have zero measure. However, if one takes a “pointless” approach and focuses just on the measure algebras , , then these algebras become naturally isomorphic after quotienting out by null sets.
Remark 3 An algebraic probability space captures a bit more structure than a classical probability space, because may be identified with a proper subset of that describes the “regular” functions (or random variables) of the space. For instance, starting with the unit circle (with the usual Haar measure and the usual trace ), any unital subalgebra of that is dense in will generate the same classical probability space on applying the functor , namely one will get the space of homomorphisms from to (with the measure induced from ). Thus for instance could be the continuous functions , the Wiener algebra or the full space , but the classical space will be unable to distinguish these spaces from each other. In particular, the functor loses information (roughly speaking, this functor takes an algebraic probability space and completes it to a von Neumann algebra, but then forgets exactly what algebra was initially used to create this completion). In ergodic theory, this sort of “extra structure” is traditionally encoded in topological terms, by assuming that the underlying probability space has a nice topological structure (e.g. a standard Borel space); however, with the algebraic perspective one has the freedom to have non-topological notions of extra structure, by choosing to be something other than an algebra of continuous functions on a topological space. I hope to discuss one such example of extra structure (coming from the Gowers-Host-Kra theory of uniformity seminorms) in a later blog post (this generalises the example of the Wiener algebra given previously, which is encoding “Fourier structure”).
A small example of how one could use the functors is as follows. Suppose one has a classical probability space with a measure-preserving action of an uncountable group , which is only defined (and an action) up to almost everywhere equivalence; thus for instance for any set and any , and might not be exactly equal, but only equal up to a null set. For similar reasons, an element of the invariant factor might not be exactly invariant with respect to , but instead one only has and equal up to null sets for each . One might like to “clean up” the action of to make it defined everywhere, and a genuine action everywhere, but this is not immediately achievable if is uncountable, since the union of all the null sets where something bad occurs may cease to be a null set. However, by applying the functor , each shift defines a morphism on the associated algebraic probability space (i.e. the Koopman operator), and then applying , we obtain a shift on a new classical probability space which now gives a genuine measure-preserving action of , and which is equivalent to the original action from a measure algebra standpoint. The invariant factor now consists of those sets in which are genuinely -invariant, not just up to null sets. (Basically, the classical probability space contains a Boolean algebra with the property that every measurable set is equivalent up to null sets to precisely one set in , allowing for a canonical “retraction” onto that eliminates all null set issues.)
More indirectly, the functors suggest that one should be able to develop a “pointless” form of ergodic theory, in which the underlying probability spaces are given algebraically rather than classically. I hope to give some more specific examples of this in later posts.
The classical foundations of probability theory (discussed for instance in this previous blog post) rest on the notion of a probability space – a space (the sample space) equipped with a -algebra (the event space), together with a countably additive probability measure that assigns a real number in the interval $[0,1]$ to each event.
One can generalise the concept of a probability space to a finitely additive probability space, in which the event space is now only a Boolean algebra rather than a -algebra, and the measure is now only finitely additive instead of countably additive, thus $\mu(E \cup F) = \mu(E) + \mu(F)$ whenever $E, F$ are disjoint events. By giving up countable additivity, one loses a fair amount of measure and integration theory, and in particular the notion of the expectation of a random variable becomes problematic (unless the random variable takes only finitely many values). Nevertheless, one can still perform a fair amount of probability theory in this weaker setting.
In this post I would like to describe a further weakening of probability theory, which I will call qualitative probability theory, in which one does not assign a precise numerical probability value to each event, but instead merely records whether this probability is zero, one, or something in between. Thus is now a function from to the set , where is a new symbol that replaces all the elements of the open interval . In this setting, one can no longer compute quantitative expressions, such as the mean or variance of a random variable; but one can still talk about whether an event holds almost surely, with positive probability, or with zero probability, and there are still usable notions of independence. (I will refer to classical probability theory as quantitative probability theory, to distinguish it from its qualitative counterpart.)
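As an entirely hypothetical sketch (the names, the representation of the value set, and the conventions below are my own, not part of any standard formalism), the arithmetic of qualitative probabilities under independence can be modelled in a few lines:

```python
# Toy model of qualitative probability values: 0 (probability zero),
# "I" (positive probability but not almost sure), 1 (almost surely).
ZERO, MID, ONE = 0, "I", 1

def q_not(p):
    """Qualitative probability of the complement of an event."""
    return {ZERO: ONE, MID: MID, ONE: ZERO}[p]

def q_and_indep(p, q):
    """Qualitative probability of the conjunction of two *independent*
    events: multiplication on the set {0, I, 1}, with I * I = I."""
    if ZERO in (p, q):
        return ZERO
    if p == ONE:
        return q
    if q == ONE:
        return p
    return MID

print(q_and_indep(MID, MID))  # independent events of intermediate
                              # probability intersect with intermediate
                              # probability
```

Note that, exactly as described above, no quantitative information survives: the model can certify "almost surely", "with positive probability", or "almost never", and nothing finer.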
The main reason I want to introduce this weak notion of probability theory is that it is well suited to talking about random variables living inside algebraic varieties, even if these varieties are defined over fields other than or . In algebraic geometry one often talks about a “generic” element of a variety defined over a field , which does not lie in any specified variety of lower dimension defined over . Once has positive dimension, such generic elements do not exist as classical, deterministic -points in , since of course any such point lies in the -dimensional subvariety of . There are of course several established ways to deal with this problem. One way (which one might call the “Weil” approach to generic points) is to extend the field to a sufficiently transcendental extension , in order to locate a sufficient number of generic points in . Another approach (which one might dub the “Zariski” approach to generic points) is to work scheme-theoretically, and interpret a generic point in as being associated to the zero ideal in the function ring of . However, I want to discuss a third perspective, in which one interprets a generic point not as a deterministic object, but rather as a random variable taking values in , but which lies in any given lower-dimensional subvariety of with probability zero. This interpretation is intuitive, but difficult to implement in classical probability theory (except perhaps when considering varieties over or ) due to the lack of a natural probability measure to place on algebraic varieties; however it works just fine in qualitative probability theory. In particular, the algebraic geometry notion of being “generically true” can now be interpreted probabilistically as an assertion that something is “almost surely true”.
It turns out that just as qualitative random variables may be used to interpret the concept of a generic point, they can also be used to interpret the concept of a type in model theory; the type of a random variable is the set of all predicates that are almost surely obeyed by . In contrast, model theorists often adopt a Weil-type approach to types, in which one works with deterministic representatives of a type, which often do not occur in the original structure of interest, but only in a sufficiently saturated extension of that structure (this is the analogue of working in a sufficiently transcendental extension of the base field). However, it seems that (in some cases at least) one can equivalently view types in terms of (qualitative) random variables on the original structure, avoiding the need to extend that structure. (Instead, one reserves the right to extend the sample space of one’s probability theory whenever necessary, as part of the “probabilistic way of thinking” discussed in this previous blog post.) We illustrate this below the fold with two related theorems that I will interpret through the probabilistic lens: the “group chunk theorem” of Weil (and later developed by Hrushovski), and the “group configuration theorem” of Zilber (and again later developed by Hrushovski). For sake of concreteness we will only consider these theorems in the theory of algebraically closed fields, although the results are quite general and can be applied to many other theories studied in model theory.
Van Vu and I have just uploaded to the arXiv our joint paper “Local universality of zeroes of random polynomials“. This paper is a sequel to our previous work on local universality of eigenvalues of (non-Hermitian) random matrices with independent entries. One can re-interpret these previous results as a universality result for a certain type of random polynomial , namely the characteristic polynomial of the random matrix . In this paper, we consider the analogous problem for a different model of random polynomial, namely polynomials with independent random coefficients. More precisely, we consider random polynomials of the form
where are deterministic complex coefficients, and are independent identically distributed copies of a complex random variable , which we normalise to have mean zero and variance one. For simplicity we will ignore the technical issue that the leading coefficient is allowed to vanish; then has zeroes (counting multiplicity), which can be viewed as a random point process in the complex plane. In analogy with other models (such as random matrix models), we expect the (suitably normalised) asymptotic statistics of this point process in the limit to be universal, in the sense that it is largely independent of the precise distribution of the atom variable .
Our results are fairly general with regard to the choice of coefficients , but we isolate three particular choices of coefficients that are particularly natural and well-studied in the literature:
- Flat polynomials (or Weyl polynomials) in which $c_j = 1/\sqrt{j!}$.
- Elliptic polynomials (or binomial polynomials) in which $c_j = \sqrt{\binom{n}{j}}$.
- Kac polynomials in which $c_j = 1$.
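As a hypothetical numerical illustration (not taken from the paper): one can sample the zeroes of these three families directly, using a standard complex gaussian atom variable and numpy's companion-matrix root-finder. The helper name `sample_zeroes` is my own, and the coefficient formulas in the comments are the standard ones for these models:

```python
import numpy as np

def sample_zeroes(model, n, rng):
    """Zeroes of a degree-n random polynomial with iid standard complex
    gaussian atoms and the indicated deterministic coefficients."""
    xi = (rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)) / np.sqrt(2)
    # logfact[j] = log(j!), computed stably to avoid overflow for large n
    logfact = np.concatenate([[0.0], np.cumsum(np.log(np.arange(1, n + 1)))])
    if model == "flat":        # c_j = 1 / sqrt(j!)
        c = np.exp(-0.5 * logfact)
    elif model == "elliptic":  # c_j = sqrt(binomial(n, j))
        c = np.exp(0.5 * (logfact[n] - logfact - logfact[::-1]))
    else:                      # Kac: c_j = 1
        c = np.ones(n + 1)
    # np.roots expects coefficients ordered from highest degree down
    return np.roots((c * xi)[::-1])

rng = np.random.default_rng(0)
z = sample_zeroes("kac", 200, rng)
print(np.median(np.abs(z)))  # close to 1: Kac zeroes cluster at the unit circle
```

Plotting the three point processes side by side is a good way to see the circular law for the flat model, the spherical symmetry of the elliptic model, and the concentration of the Kac zeroes on the unit circle.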
The flat and elliptic polynomials enjoy special symmetries in the model case when the atom distribution is a complex Gaussian . Indeed, the zeroes of elliptic polynomials with complex Gaussian coefficients have a distribution which is invariant with respect to isometries of the Riemann sphere (thus has the same distribution as ), while the zeroes of the limiting case of the flat polynomials with complex Gaussian coefficients are similarly invariant with respect to isometries of the complex plane . (For a nice geometric proof of these facts, I recommend the book of Hough, Krishnapur, Peres, and Virag.)
The global (i.e. coarse-scale) distribution of zeroes of these polynomials is well understood, first in the case of gaussian distributions using the fundamental tool of the Kac-Rice formula, and then for more general choices of atom distribution in the recent work of Kabluchko and Zaporozhets. The zeroes of the flat polynomials are asymptotically distributed according to the circular law, normalised to be uniformly distributed on the disk of radius $\sqrt{n}$ centred at the origin. To put it a bit informally, the zeroes are asymptotically distributed according to the measure , where denotes Lebesgue measure on the complex plane. One can non-rigorously see the scale appear by observing that when is much larger than , we expect the leading term of the flat polynomial to dominate, so that the polynomial should not have any zeroes in this region.
Similarly, the zeroes of the elliptic polynomials are known to be asymptotically distributed according to a Cauchy-type distribution . The Kac polynomials behave differently; the zeroes concentrate uniformly on the unit circle (which is reasonable, given that one would expect the top order term to dominate for and the bottom order term to dominate for ). In particular, whereas the typical spacing between zeroes in the flat and elliptic cases would be expected to be comparable to , the typical spacing between zeroes for a Kac polynomial would be expected instead to be comparable to .
In our paper we studied the local distribution of zeroes at the scale of the typical spacing. In the case of polynomials with continuous complex atom distribution , the natural statistic to measure here is the -point correlation function , which for distinct complex numbers can be defined as the probability that there is a zero in each of the balls for some infinitesimal , divided by the normalising factor . (One can also define a -point correlation function in the case of a discrete distribution, but it will be a measure rather than a function in that case.) Our first main theorem is a general replacement principle which asserts, very roughly speaking, that the asymptotic -point correlation functions of two random polynomials will agree if the log-magnitudes have asymptotically the same distribution (actually we have to consider the joint distribution of for several points , but let us ignore this detail for now), and if the polynomials obey a “non-clustering property” which asserts, roughly speaking, that not too many of the zeroes of can typically concentrate in a small ball. This replacement principle was implicit in our previous paper (and can be viewed as a local-scale version of the global-scale replacement principle in this earlier paper of ours). Specialising the replacement principle to the elliptic or flat polynomials, and using the Lindeberg swapping argument, we obtain a Two Moment Theorem that asserts, roughly speaking, that the asymptotic behaviour of the -point correlation functions depends only on the first two moments of the real and imaginary components of , as long as one avoids some regions of space where universality is expected to break down. (In particular, because does not have a universal distribution, one does not expect universality to hold near the origin; there is a similar problem near infinity.) Closely related results, by a slightly different method, have also been obtained recently by Ledoan, Merkli, and Starr.
A similar result holds for the Kac polynomials after rescaling to account for the narrower spacing between zeroes.
We also have analogous results in the case of polynomials with real coefficients (so that the coefficients and the atom distribution are both real). In this case one expects to see a certain number of real zeroes, together with conjugate pairs of complex zeroes. Instead of the -point correlation function , the natural object of study is now the mixed -point correlation function that (roughly speaking) controls how often one can find a real zero near the real numbers , and a complex zero near the points . It turns out that one can disentangle the real and strictly complex zeroes and obtain separate universality results for both zeroes, provided that at least one of the polynomials involved obeys a “weak repulsion estimate” that shows that the real zeroes do not cluster very close to each other (and that the complex zeroes do not cluster very close to their complex conjugates). Such an estimate is needed to avoid the situation of two nearby real zeroes “colliding” to create a (barely) complex zero and its complex conjugate, or the time reversal of such a collision. Fortunately, in the case of real gaussian polynomials one can use the Kac-Rice formula to establish such a weak repulsion estimate, allowing analogues of the above universality results for complex random polynomials in the real case. Among other things, this gives universality results for the number of real zeroes of a random flat or elliptic polynomial; it turns out this number is typically equal to and respectively. (For Kac polynomials, the situation is somewhat different; it was already known that thanks to a long series of papers, starting with the foundational work of Kac and culminating in the work of Ibragimov and Maslova.)
While our methods are based on our earlier work on eigenvalues of random matrices, the situation with random polynomials is actually somewhat easier to deal with. This is because the log-magnitude of a random polynomial with independent coefficients is significantly easier to control than the log-determinant of a random matrix, as the former can be controlled by the central limit theorem, while the latter requires significantly more difficult arguments (in particular, bounds on the least singular value combined with Girko’s Hermitization trick). As such, one could view the current paper as an introduction to our more complicated previous paper, and with this in mind we have written the current paper to be self-contained (though this did make the paper somewhat lengthy, checking in at 68 pages).
The fundamental notions of calculus, namely differentiation and integration, are often viewed as being the quintessential concepts in mathematical analysis, as their standard definitions involve the concept of a limit. However, it is possible to capture most of the essence of these notions by purely algebraic means (almost completely avoiding the use of limits, Riemann sums, and similar devices), which turns out to be useful when trying to generalise these concepts to more abstract situations in which it becomes convenient to permit the underlying number systems involved to be something other than the real or complex numbers, even if this makes many standard analysis constructions unavailable. For instance, the algebraic notion of a derivation often serves as a substitute for the analytic notion of a derivative in such cases, by abstracting out the key algebraic properties of differentiation, namely linearity and the Leibniz rule (also known as the product rule).
Abstract algebraic analogues of integration are less well known, but can still be developed. To motivate such an abstraction, consider the integration functional from the space of complex-valued Schwartz functions to the complex numbers, defined by
where the integration on the right is the usual Lebesgue integral (or improper Riemann integral) from analysis. This functional obeys two obvious algebraic properties. Firstly, it is linear over , thus
for all and . Secondly, it is translation invariant, thus
for all , where is the translation of by . Motivated by the uniqueness theory of Haar measure, one might expect that these two axioms already uniquely determine after one sets a normalisation, for instance by requiring that
This is not quite true as stated (one can modify the proof of the Hahn-Banach theorem, after first applying a Fourier transform, to create pathological translation-invariant linear functionals on that are not multiples of the standard integration functional), but if one adds a mild analytical axiom, such as continuity of (using the usual Schwartz topology on ), then the above axioms are enough to uniquely pin down the notion of integration. Indeed, if is a continuous linear functional that is translation invariant, then from the linearity and translation invariance axioms one has
for all and non-zero reals . If is Schwartz, then as , one can verify that the Newton quotients converge in the Schwartz topology to the derivative of , so by the continuity axiom one has
Next, note that any Schwartz function of integral zero has an antiderivative which is also Schwartz, and so annihilates all zero-integral Schwartz functions, and thus must be a scalar multiple of the usual integration functional. Using the normalisation (4), we see that must therefore be the usual integration functional, giving the claimed uniqueness.
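Schematically (writing $\tau_h f(x) := f(x+h)$ for the translation, and glossing over the exact normalisation), the uniqueness argument can be summarised as follows:

```latex
% I is linear, translation invariant and continuous on Schwartz space.
% Step 1: I annihilates derivatives, since the Newton quotients of f
% converge to f' in the Schwartz topology:
I(f') \;=\; I\Big(\lim_{h\to 0} \frac{\tau_h f - f}{h}\Big)
      \;=\; \lim_{h\to 0} \frac{I(\tau_h f) - I(f)}{h} \;=\; 0.
% Step 2: every Schwartz function of integral zero is the derivative of
% a Schwartz function, so I vanishes on the kernel of the classical
% integral; hence I = c \int for some scalar c, which the normalisation
% then pins down to give the classical integration functional.
```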
Motivated by the above discussion, we can define the notion of an abstract integration functional taking values in some vector space , and applied to inputs in some other vector space that enjoys a linear action (the “translation action”) of some group , as being a functional which is both linear and translation invariant, thus one has the axioms (1), (2), (3) for all , scalars , and . The previous discussion then considered the special case when , , , and was the usual translation action.
Once we have performed this abstraction, we can now present analogues of classical integration which bear very little analytic resemblance to the classical concept, but which still have much of the algebraic structure of integration. Consider for instance the situation in which we keep the complex range , the translation group , and the usual translation action , but we replace the space of Schwartz functions by the space of polynomials of degree at most with complex coefficients, where is a fixed natural number; note that this space is translation invariant, so it makes sense to talk about an abstract integration functional . Of course, one cannot apply traditional integration concepts to non-zero polynomials, as they are not absolutely integrable. But one can repeat the previous arguments to show that any abstract integration functional must annihilate derivatives of polynomials of degree at most :
Clearly, every polynomial of degree at most is thus annihilated by , which makes a scalar multiple of the functional that extracts the top coefficient of a polynomial, thus if one sets a normalisation
for some constant , then one has
for any polynomial . So we see that up to a normalising constant, the operation of extracting the top order coefficient of a polynomial of fixed degree serves as the analogue of integration. In particular, despite the fact that integration is supposed to be the “opposite” of differentiation (as indicated for instance by (5)), we see in this case that integration is basically (-fold) differentiation; indeed, compare (6) with the identity
In particular, we see that, in contrast to the usual Lebesgue integral, the integration functional (6) can be localised to an arbitrary location: one only needs to know the germ of the polynomial at a single point in order to determine the value of the functional (6). This localisation property may initially seem at odds with the translation invariance, but the two can be reconciled thanks to the extremely rigid nature of the class , in contrast to the Schwartz class which admits bump functions and so can generate local phenomena that can only be detected in small regions of the underlying spatial domain, and which therefore forces any translation-invariant integration functional on such function classes to measure the function at every single point in space.
The reversal of the relationship between integration and differentiation is also reflected in the fact that the abstract integration operation on polynomials interacts with the scaling operation in essentially the opposite way from the classical integration operation. Indeed, for classical integration on , one has
for Schwartz functions , and so in this case the integration functional obeys the scaling law
In contrast, the abstract integration operation defined in (6) obeys the opposite scaling law
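Both the translation invariance and this opposite scaling law are easy to check concretely; in the following toy sketch (the functional name `I_top` and the test values are mine), the abstract integration functional on polynomials of degree at most $n$ is extraction of the top coefficient:

```python
import numpy as np
from numpy.polynomial import Polynomial

n = 4
rng = np.random.default_rng(1)
p = Polynomial(rng.standard_normal(n + 1))  # random polynomial of degree <= n

def I_top(q):
    """Abstract integration functional on polynomials of degree <= n:
    extract the coefficient of x^n."""
    c = q.coef
    return c[n] if len(c) > n else 0.0

x = Polynomial([0.0, 1.0])
h, lam = 2.7, 1.5

shifted = p(x + h)    # translate: q(x) = p(x + h), via composition
scaled = p(lam * x)   # dilate:    q(x) = p(lam * x)

print(I_top(shifted) - I_top(p))          # translation invariance: ~0
print(I_top(scaled) - lam**n * I_top(p))  # opposite scaling law:   ~0
```

Translation leaves the top coefficient untouched, while dilation by $\lambda$ multiplies it by $\lambda^n$, exactly the reverse of the $\lambda^{-1}$-type scaling of a classical integral.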
Remark 1 One way to interpret what is going on is to view the integration operation (6) as a renormalised version of integration. A polynomial is, in general, not absolutely integrable, and the partial integrals
diverge as . But if one renormalises these integrals by the factor , then one recovers convergence,
thus giving an interpretation of (6) as a renormalised classical integral, with the renormalisation being responsible for the unusual scaling relationship in (7). However, this interpretation is a little artificial, and it seems that it is best to view functionals such as (6) from an abstract algebraic perspective, rather than to try to force an analytic interpretation on them.
Now we return to the classical Lebesgue integral
As noted earlier, this integration functional has a translation invariance associated to translations along the real line , as well as a dilation invariance by real dilation parameters . However, if we refine the class of functions somewhat, we can obtain a stronger family of invariances, in which we allow complex translations and dilations. More precisely, let denote the space of all functions which are entire (or equivalently, are given by a Taylor series with an infinite radius of convergence around the origin) and also admit rapid decay in a sectorial neighbourhood of the real line, or more precisely there exists an such that for every there exists such that one has the bound
whenever . For want of a better name, we shall call elements of this space Schwartz entire functions. This is clearly a complex vector space. Typical examples of Schwartz entire functions are the complex gaussians
where are complex numbers with . From the Cauchy integral formula (and its derivatives) we see that if lies in , then the restriction of to the real line lies in ; conversely, from analytic continuation we see that every function in has at most one extension in . Thus one can identify with a subspace of , and in particular the integration functional (8) is inherited by , and by abuse of notation we denote the resulting functional as also. Note, in analogy with the situation with polynomials, that this abstract integration functional is somewhat localised; one only needs to evaluate the function on the real line, rather than the entire complex plane, in order to compute . This is consistent with the rigid nature of Schwartz entire functions, as one can uniquely recover the entire function from its values on the real line by analytic continuation.
Of course, the functional remains translation invariant with respect to real translation:
However, thanks to contour shifting, we now also have translation invariance with respect to complex translation:
where of course we continue to define the translation operator for complex by the usual formula . In a similar vein, we also have the scaling law
for any , if is a complex number sufficiently close to (where “sufficiently close” depends on , and more precisely depends on the sectoral aperture parameter associated to ); again, one can verify that lies in for sufficiently close to . These invariances (which relocalise the integration functional onto other contours than the real line ) are very useful for computing integrals, and in particular for computing gaussian integrals. For instance, the complex translation invariance tells us (after shifting by ) that
when with , and then an application of the complex scaling law (and a continuity argument, observing that there is a compact path connecting to in the right half plane) gives
using the branch of on the right half-plane for which . Using the normalisation (4) we thus have
giving the usual gaussian integral formula
This is a basic illustration of the power that a large symmetry group (in this case, the complex homothety group) can bring to bear on the task of computing integrals.
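One can sanity-check the complex-scaled gaussian identity numerically. The sketch below uses the standard form $\int_{\bf R} e^{-zx^2}\,dx = \sqrt{\pi/z}$ for $\mathrm{Re}(z)>0$ (with the principal branch of the square root), which may differ from the normalisation used above by constant factors:

```python
import numpy as np

z = 1.0 + 0.75j                 # any z with positive real part
x = np.linspace(-30.0, 30.0, 400001)
dx = x[1] - x[0]
f = np.exp(-z * x**2)
# trapezoid rule; the truncation error at |x| = 30 is negligible
numeric = (f.sum() - 0.5 * (f[0] + f[-1])) * dx
exact = np.sqrt(np.pi / z)      # principal branch
print(abs(numeric - exact))     # essentially zero
```

The same computation with a complex shift of the contour (replacing `x` by `x + 1j * b` in the integrand) returns the same value, illustrating the complex translation invariance coming from contour shifting.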
One can extend this sort of analysis to higher dimensions. For any natural number , let denote the space of all functions which are jointly entire in the sense that can be expressed as a Taylor series in which is absolutely convergent for all choices of , and such that there exists an such that for any there is for which one has the bound
whenever for all , where and . Again, we call such functions Schwartz entire functions; a typical example is the function
where is an complex symmetric matrix with positive definite real part, is a vector in , and is a complex number. We can then define an abstract integration functional by integration on the real slice :
where is the usual Lebesgue measure on . By contour shifting in each of the variables separately, we see that is invariant with respect to complex translations of each of the variables, and is thus invariant under translating the joint variable by . One can also verify the scaling law
for complex matrices sufficiently close to the origin, where . This can be seen for shear transformations by Fubini’s theorem and the aforementioned translation invariance, while for diagonal transformations near the origin this can be seen from applications of the one-dimensional scaling law, and the general case then follows by composition. Among other things, these laws then easily lead to the higher-dimensional generalisation
whenever is a complex symmetric matrix with positive definite real part, is a vector in , and is a complex number, basically by repeating the one-dimensional argument sketched earlier. Here, we choose the branch of for all matrices in the indicated class for which .
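Here is a numerical check of the two-dimensional instance, again in the standard normalisation $\int_{{\bf R}^n} e^{-x^T A x}\,dx = \pi^{n/2}/\sqrt{\det A}$ for complex symmetric $A$ with positive definite real part (this normalisation is assumed for the sketch, and may differ from the text's by constants):

```python
import numpy as np

# complex symmetric matrix with positive definite real part
A = np.array([[1.0, 0.3j],
              [0.3j, 1.2]])
g = np.linspace(-8.0, 8.0, 1601)
h = g[1] - g[0]
X, Y = np.meshgrid(g, g, indexing="ij")
# quadratic form x^T A x for x = (X, Y), using the symmetry of A
Q = A[0, 0] * X**2 + 2 * A[0, 1] * X * Y + A[1, 1] * Y**2
numeric = np.exp(-Q).sum() * h**2           # Riemann sum over the grid
exact = np.pi / np.sqrt(np.linalg.det(A))   # pi^{n/2} / sqrt(det A), n = 2
print(abs(numeric - exact))                 # essentially zero
```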
Now we turn to an integration functional suitable for computing complex gaussian integrals such as
where is now a complex variable, is the adjoint, is a complex matrix with positive definite Hermitian part, are column vectors in , is a complex number, and is times Lebesgue measure on . (The factors of two here turn out to be a natural normalisation, but they can be ignored on a first reading.) As we shall see later, such integrals are relevant when performing computations on the Gaussian Unitary Ensemble (GUE) in random matrix theory. Note that the integrand here is not complex analytic due to the presence of the complex conjugates. However, this can be dealt with by the trick of replacing the complex conjugate by a variable which is formally conjugate to , but which is allowed to vary independently of . More precisely, let be the space of all functions of two independent -tuples
of complex variables, which is jointly entire in all variables (in the sense defined previously, i.e. there is a joint Taylor series that is absolutely convergent for all independent choices of ), and such that there is an such that for every there is such that one has the bound
whenever . We will call such functions Schwartz analytic. Note that the integrand in (11) is Schwartz analytic when has positive definite Hermitian part, if we reinterpret as the transpose of rather than as the adjoint of in order to make the integrand entire in and . We can then define an abstract integration functional by the formula
thus can be localised to the slice of (though, as with previous functionals, one can use contour shifting to relocalise to other slices also.) One can also write this integral as
and note that the integrand here is a Schwartz entire function on , thus linking the Schwartz analytic integral with the Schwartz entire integral. Using this connection, one can verify that this functional is invariant with respect to translating and by independent shifts in (thus giving a translation symmetry), and one also has the independent dilation symmetry
for complex matrices that are sufficiently close to the identity, where . Arguing as before, we can then compute (11) as
In particular, this gives an integral representation for the determinant-reciprocal of a complex matrix with positive definite Hermitian part, in terms of gaussian expressions in which only appears linearly in the exponential:
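As a quick sanity check, the one-dimensional case of this determinant-reciprocal identity reads $\int_{\bf C} e^{-\bar z m z}\, \frac{d^2 z}{\pi} = \frac{1}{m}$ for $\mathrm{Re}(m) > 0$ (in the standard complex gaussian normalisation, which may differ from the text's; for matrices the right-hand side becomes $1/\det M$), and can be verified numerically:

```python
import numpy as np

m = 1.0 + 0.6j                            # any m with positive real part
g = np.linspace(-8.0, 8.0, 1601)
h = g[1] - g[0]
X, Y = np.meshgrid(g, g, indexing="ij")   # z = X + iY
# Riemann sum of e^{-m |z|^2} over the complex plane, normalised by pi
numeric = np.exp(-m * (X**2 + Y**2)).sum() * h**2 / np.pi
exact = 1.0 / m
print(abs(numeric - exact))               # essentially zero
```

Note the reciprocal on the right-hand side: the bosonic gaussian produces inverse determinants, which is exactly what the fermionic integrals discussed below are brought in to cancel.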
This formula is then convenient for computing statistics such as
for random matrices drawn from the Gaussian Unitary Ensemble (GUE), and some choice of spectral parameter with ; we review this computation later in this post. By the trick of matrix differentiation of the determinant (as reviewed in this recent blog post), one can also use this method to compute matrix-valued statistics such as
However, if one restricts attention to classical integrals over real or complex (and in particular, commuting or bosonic) variables, it does not seem possible to easily eradicate the reciprocal determinant factors in such calculations, which is unfortunate because many statistics of interest in random matrix theory, such as the expected Stieltjes transform
which is the Stieltjes transform of the density of states. However, it turns out (as I learned recently from Peter Sarnak and Tom Spencer) that it is possible to cancel out these negative determinant factors by balancing the bosonic gaussian integrals with an equal number of fermionic gaussian integrals, in which one integrates over a family of anticommuting variables. These fermionic integrals are closer in spirit to the polynomial integral (6) than to Lebesgue type integrals, and in particular obey a scaling law which is inverse to the Lebesgue scaling (in particular, a linear change of fermionic variables ends up transforming a fermionic integral by rather than ), which conveniently cancels out the reciprocal determinants in the previous calculations. Furthermore, one can combine the bosonic and fermionic integrals into a unified integration concept, known as the Berezin integral (or Grassmann integral), in which one integrates functions of supervectors (vectors with both bosonic and fermionic components), and is of particular importance in the theory of supersymmetry in physics. (The prefix “super” in physics means, roughly speaking, that the object or concept that the prefix is attached to contains both bosonic and fermionic aspects.) When one applies this unified integration concept to gaussians, this can lead to quite compact and efficient calculations (provided that one is willing to work with “super”-analogues of various concepts in classical linear algebra, such as the supertrace or superdeterminant).
Abstract integrals of the flavour of (6) arose in quantum field theory, when physicists sought to formally compute integrals of the form
where are familiar commuting (or bosonic) variables (which, in particular, can often be localised to be scalar variables taking values in or ), while were more exotic anticommuting (or fermionic) variables, taking values in some vector space of fermions. (As we shall see shortly, one can formalise these concepts by working in a supercommutative algebra.) The integrand was a formally analytic function of , in that it could be expanded as a (formal, noncommutative) power series in the variables . For functions that depend only on bosonic variables, it is certainly possible for such analytic functions to be in the Schwartz class and thus fall under the scope of the classical integral, as discussed previously. However, functions that depend on fermionic variables behave rather differently. Indeed, a fermionic variable must anticommute with itself, so that . In particular, any power series in terminates after the linear term in , so that a function can only be analytic in if it is a polynomial of degree at most in ; more generally, an analytic function of fermionic variables must be a polynomial of degree at most , and an analytic function of bosonic and fermionic variables can be Schwartz in the bosonic variables but will be polynomial in the fermionic variables. As such, to interpret the integral (14), one can use classical (Lebesgue) integration (or the variants discussed above for integrating Schwartz entire or Schwartz analytic functions) for the bosonic variables, but must use abstract integrals such as (6) for the fermionic variables, leading to the concept of Berezin integration mentioned earlier.
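To make the "power series terminates after the linear term" phenomenon concrete, here is a minimal Grassmann-algebra sketch. The representation (dicts keyed by sorted tuples of generator indices) and the sign conventions are my own choices; in particular I use the convention $\int d\theta\,\theta = 1$, which extracts the coefficient of $\theta$ exactly as the polynomial functional (6) extracts a top coefficient:

```python
def gmul(a, b):
    """Product of two Grassmann elements {index tuple: coefficient}."""
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):
                continue  # repeated generator: theta^2 = 0
            merged = list(ka) + list(kb)
            sign = 1  # bubble sort, tracking the sign of the permutation
            for i in range(len(merged)):
                for j in range(len(merged) - 1):
                    if merged[j] > merged[j + 1]:
                        merged[j], merged[j + 1] = merged[j + 1], merged[j]
                        sign = -sign
            key = tuple(merged)
            out[key] = out.get(key, 0) + sign * va * vb
    return {k: v for k, v in out.items() if v != 0}

def berezin(a, g):
    """Berezin integral in the generator g: keep only monomials containing
    g, move g to the front (picking up a sign), then strip it."""
    out = {}
    for k, v in a.items():
        if g in k:
            sign = (-1) ** k.index(g)
            rest = tuple(i for i in k if i != g)
            out[rest] = out.get(rest, 0) + sign * v
    return out

theta, thetabar = {(0,): 1}, {(1,): 1}
a = 3.5
# e^{-a thetabar theta} = 1 - a thetabar theta: the exponential series
# terminates, since any quadratic term in a single fermion vanishes
expo = {(): 1}
for k, v in gmul(thetabar, theta).items():
    expo[k] = expo.get(k, 0) - a * v
# the fermionic gaussian integral gives a (not 1/a as in the bosonic case)
result = berezin(berezin(expo, 0), 1)
print(result)
```

With these conventions the fermionic gaussian integral evaluates to $a$ rather than $1/a$, the scaling behaviour that allows fermionic integrals to cancel the reciprocal determinants produced by their bosonic counterparts. (Other sign and ordering conventions for Berezin integration appear in the literature; only the $\theta^2 = 0$ mechanics and the inverse scaling are essential here.)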
In this post I would like to set out some of the basic algebraic formalism of Berezin integration, particularly with regards to integration of gaussian-type expressions, and then show how this formalism can be used to perform computations involving GUE (for instance, one can compute the density of states of GUE by this machinery without recourse to the theory of orthogonal polynomials). The use of supersymmetric gaussian integrals to analyse ensembles such as GUE appears in the work of Efetov (and was also proposed in the slightly earlier works of Parisi-Sourlas and McKane, with a related approach also appearing in the work of Wegner); the material here is adapted from this survey of Mirlin, as well as the later papers of Disertori-Pinson-Spencer and of Disertori.
[These are notes intended mostly for myself, as these topics are useful in random matrix theory, but may be of interest to some readers also. -T.]
One of the most fundamental partial differential equations in mathematics is the heat equation

$\displaystyle \partial_t u = \frac{1}{2} \Delta u \ \ \ \ \ (1)$
where $u: [0,\infty) \times {\bf R}^n \rightarrow {\bf R}$ is a scalar function of both time and space, and $\Delta := \sum_{i=1}^n \frac{\partial^2}{\partial x_i^2}$ is the Laplacian. For the purposes of this post, we will ignore all technical issues of regularity and decay, and always assume that the solutions to equations such as (1) have all the regularity and decay in order to justify all formal operations such as the chain rule, integration by parts, or differentiation under the integral sign. The factor of $\frac{1}{2}$ in the definition of the heat propagator is of course an arbitrary normalisation, chosen for some minor technical reasons; one can certainly continue the discussion below with other choices of normalisations if desired.
In probability theory, this equation takes on particular significance when $u$ is restricted to be non-negative, and furthermore to be a probability measure at each time, in the sense that

$\displaystyle \int_{{\bf R}^n} u(t,x)\ dx = 1$
for all $t \geq 0$. (Actually, it suffices to verify this constraint at time $t = 0$, as the heat equation (1) will then preserve this constraint.) Indeed, in this case, one can interpret $u(t,\cdot)$ as the probability distribution of a Brownian motion
where the position of the particle is a stochastic process with initial probability distribution $u(0,\cdot)$; see for instance this previous blog post for more discussion.
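As a numerical illustration of this correspondence (a quick Monte Carlo sketch in Python/numpy of mine, not part of the post), one can build Brownian paths from independent increments and compare the empirical law at time $t$ against the gaussian density that the normalisation $\partial_t u = \frac{1}{2}\Delta u$ predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
t, steps, paths = 2.0, 100, 100_000
dt = t / steps

# Brownian motion built from independent increments of variance dt
B = rng.normal(0.0, np.sqrt(dt), size=(paths, steps)).sum(axis=1)

# under the normalisation du/dt = (1/2) Delta u, the law of the particle at
# time t has density (2 pi t)^{-1/2} exp(-x^2/(2t)), i.e. variance t
assert abs(B.var() - t) < 0.05

# compare the empirical density near x = 0.5 with this gaussian profile
x, h = 0.5, 0.1
empirical = np.mean(np.abs(B - x) < h) / (2 * h)
exact = (2 * np.pi * t) ** -0.5 * np.exp(-x**2 / (2 * t))
assert abs(empirical - exact) < 0.02
```

With the other common normalisation $\partial_t u = \Delta u$, the variance at time $t$ would instead be $2t$; this is the "minor technical reason" for the factor of $\frac{1}{2}$.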
A model example of a solution to the heat equation to keep in mind is that of the fundamental solution

$\displaystyle u(t,x) = \frac{1}{(2\pi t)^{n/2}} e^{-|x|^2/2t} \ \ \ \ \ (3)$

defined for any $t > 0$, which represents the distribution of Brownian motion of a particle starting at the origin $x = 0$ at time $t = 0$. At time $t$, this is the distribution of an ${\bf R}^n$-valued random variable, each coefficient of which is an independent random variable of mean zero and variance $t$. (As $t \rightarrow 0$, $u(t,\cdot)$ converges in the sense of distributions to a Dirac mass at the origin.)
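One can sanity-check by finite differences that this gaussian profile does solve the heat equation with the $\frac{1}{2}$ normalisation, and that its total mass stays equal to $1$ (a short Python/numpy sketch of mine, not part of the post; the evaluation points are arbitrary):

```python
import numpy as np

def u(t, x, n=1):
    # fundamental solution: a gaussian of variance t in each coordinate
    return (2 * np.pi * t) ** (-n / 2) * np.exp(-x**2 / (2 * t))

t0, x0, h = 1.3, 0.7, 1e-4
u_t  = (u(t0 + h, x0) - u(t0 - h, x0)) / (2 * h)               # d/dt
u_xx = (u(t0, x0 + h) - 2 * u(t0, x0) + u(t0, x0 - h)) / h**2  # Laplacian (n=1)
assert abs(u_t - 0.5 * u_xx) < 1e-6   # the heat equation du/dt = (1/2) u_xx

# total mass is 1 at each time, consistent with the probability interpretation
xs = np.linspace(-20, 20, 40001)
mass = u(t0, xs).sum() * (xs[1] - xs[0])
assert abs(mass - 1.0) < 1e-6
```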
The heat equation can also be viewed as the gradient flow for the Dirichlet form

$\displaystyle D(u) := \frac{1}{2} \int_{{\bf R}^n} |\nabla u|^2\ dx \ \ \ \ \ (4)$

since one has the integration by parts identity

$\displaystyle \int_{{\bf R}^n} \frac{1}{2} \Delta u\ v\ dx = - \frac{1}{2} \int_{{\bf R}^n} \nabla u \cdot \nabla v\ dx \ \ \ \ \ (5)$

for all smooth, rapidly decreasing $u, v$, which formally implies that $\frac{1}{2} \Delta u$ is (half of) the negative gradient of the Dirichlet energy $D(u)$ with respect to the $L^2({\bf R}^n)$ inner product. Among other things, this implies that the Dirichlet energy decreases in time:

$\displaystyle \partial_t D(u(t)) = - \frac{1}{2} \int_{{\bf R}^n} |\Delta u|^2\ dx. \ \ \ \ \ (6)$
For instance, for the fundamental solution (3), one can verify for any time $t > 0$ that

$\displaystyle \frac{1}{2} \int_{{\bf R}^n} |\nabla u(t,x)|^2\ dx = \frac{n}{4t (4\pi t)^{n/2}} \ \ \ \ \ (7)$

(assuming I have not made a mistake in the calculation). In a similar spirit we have

$\displaystyle \partial_t \frac{1}{2} \int_{{\bf R}^n} u^2\ dx = - \frac{1}{2} \int_{{\bf R}^n} |\nabla u|^2\ dx. \ \ \ \ \ (8)$
Since the Dirichlet energy is non-negative, the formula (6) implies that $\int_{{\bf R}^n} |\Delta u|^2\ dx$ is integrable in time, and in particular we see that $\Delta u(t)$ converges to zero as $t \rightarrow \infty$, in some averaged sense at least; similarly, (8) suggests that $\nabla u(t)$ also converges to zero. This suggests that $u$ converges to a constant function; but as $u$ is also supposed to decay to zero at spatial infinity, we thus expect solutions to the heat equation in ${\bf R}^n$ to decay to zero in some sense as $t \rightarrow \infty$. However, the decay is only expected to be polynomial in nature rather than exponential; for instance, the solution (3) decays in the $L^\infty$ norm like $O(t^{-n/2})$.
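The monotonicity of the Dirichlet energy is easy to observe numerically. The crude periodic finite-difference solver below (a Python/numpy sketch of mine, not from the post; the grid, initial data, and forward Euler scheme are arbitrary choices, with the time step kept inside the usual stability range) shows the discrete energy decreasing at every step, with the total mass conserved along the flow:

```python
import numpy as np

# crude periodic 1D solver for du/dt = (1/2) u_xx, forward Euler in time
N, L = 256, 2 * np.pi
dx = L / N
x = np.arange(N) * dx
u = 1.0 + 0.5 * np.sin(x) + 0.3 * np.cos(3 * x)   # smooth initial data
dt = 0.2 * dx**2                                   # well inside stability range

def dirichlet_energy(u):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    return 0.5 * np.sum(ux**2) * dx

energies = []
for _ in range(2000):
    energies.append(dirichlet_energy(u))
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * 0.5 * lap

assert np.all(np.diff(energies) < 0)      # energy decreases at every step
assert abs(u.mean() - 1.0) < 1e-9         # mass is conserved along the flow
```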
Since $\Delta u = \hbox{div}(\nabla u)$ is a total divergence, we also observe the basic cancellation property

$\displaystyle \int_{{\bf R}^n} \Delta u\ dx = 0. \ \ \ \ \ (9)$
There are other quantities relating to $u$ that also decrease in time under heat flow, particularly in the important case when $u$ is a probability measure. In this case, it is natural to introduce the entropy

$\displaystyle \int_{{\bf R}^n} u \log u\ dx.$

Thus, for instance, if $u$ is the uniform distribution on some measurable subset $E$ of ${\bf R}^n$ of finite measure $|E|$, the entropy would be $-\log |E|$. Intuitively, as the entropy decreases, the probability distribution gets wider and flatter. For instance, in the case of the fundamental solution (3), one has $\int_{{\bf R}^n} u \log u\ dx = -\frac{n}{2} \log(2\pi e t)$ for any $t > 0$, reflecting the fact that $u(t,\cdot)$ is approximately uniformly distributed on a ball of radius $O(\sqrt{t})$ (and thus of measure $O(t^{n/2})$).
A short formal computation shows (if one assumes for simplicity that $u$ is strictly positive, which is not an unreasonable hypothesis, particularly in view of the strong maximum principle) using (9), (5) that

$\displaystyle \partial_t \int_{{\bf R}^n} u \log u\ dx = - 2 \int_{{\bf R}^n} |\nabla v|^2\ dx$

where $v := \sqrt{u}$ is the square root of $u$. For instance, if $u$ is the fundamental solution (3), one can check that $\partial_t \int_{{\bf R}^n} u \log u\ dx = -\frac{n}{2t}$ (note that this is a significantly cleaner formula than (7)!).
In particular, the entropy is decreasing, which corresponds well to one’s intuition that the heat equation (or Brownian motion) should serve to spread out a probability distribution over time.
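For the fundamental solution one can check both the decrease of the entropy and its convexity in closed form, since the entropy is $-\frac{n}{2}\log(2\pi e t)$; the short Python/numpy verification below (mine, not from the post) compares this formula against direct quadrature in one dimension and then differentiates it numerically:

```python
import numpy as np

def entropy(t, n=1):
    # entropy  int u log u dx  of the fundamental solution (a gaussian of
    # variance t in each coordinate): -(n/2) log(2 pi e t)
    return -(n / 2) * np.log(2 * np.pi * np.e * t)

# check the closed form against direct quadrature in one dimension
t = 1.7
xs = np.linspace(-15, 15, 300001)
g = (2 * np.pi * t) ** -0.5 * np.exp(-xs**2 / (2 * t))
quad = np.sum(g * np.log(g)) * (xs[1] - xs[0])
assert abs(quad - entropy(t)) < 1e-6

# d/dt entropy = -1/(2t) < 0: the entropy is decreasing...
h = 1e-5
dE = (entropy(t + h) - entropy(t - h)) / (2 * h)
assert abs(dE - (-1 / (2 * t))) < 1e-8

# ...and its rate of decrease is itself decreasing (entropy is convex in t)
d2E = (entropy(t + h) - 2 * entropy(t) + entropy(t - h)) / h**2
assert d2E > 0
```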
Actually, one can say more: the rate of decrease of the entropy is itself decreasing, or in other words the entropy is convex. I do not have a satisfactorily intuitive reason for this phenomenon, but it can be proved by straightforward application of basic several variable calculus tools (such as the chain rule, product rule, quotient rule, and integration by parts), and completing the square. Namely, by using the chain rule we have
valid for any smooth function $F$, we see from (1) that
and thus (again assuming that $u$, and hence $v$, is strictly positive to avoid technicalities)
We thus have
It is now convenient to compute using the Einstein summation convention to hide the summation over repeated indices. We have
and
By integration by parts and interchanging partial derivatives, we may write the first integral as
and from the quotient and product rules, we may write the second integral as
Gathering terms, completing the square, and making the summations explicit again, we see that
and so in particular the rate of decrease of the entropy is itself always decreasing.
The above identity can also be written as
Exercise 1 Give an alternate proof of the above identity by writing , and deriving the equation for .
It was observed in a well known paper of Bakry and Émery that the above monotonicity properties hold for a much larger class of heat flow-type equations, and lead to a number of important relations between energy and entropy, such as the log-Sobolev inequality of Gross and of Federbush, and the hypercontractivity inequality of Nelson; we will discuss one such family of generalisations (or more precisely, variants) below the fold.
Let $n$ be a large natural number, and let $M_n$ be a matrix drawn from the Gaussian Unitary Ensemble (GUE), by which we mean that $M_n$ is a Hermitian matrix whose strictly upper triangular entries are iid complex gaussians with mean zero and variance one, and whose diagonal entries are iid real gaussians with mean zero and variance one (and independent of the upper triangular entries). The eigenvalues $\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)$ are then real and almost surely distinct, and can be viewed as a random point process on the real line. One can then form the $k$-point correlation functions $\rho_k$ for every $k \geq 1$, which can be defined by duality by requiring
for any test function $F: {\bf R}^k \rightarrow {\bf R}$. For GUE, which is a continuous matrix ensemble, one can also define $\rho_k(x_1,\ldots,x_k)$ for distinct $x_1 < \ldots < x_k$ as the unique quantity such that the probability that there is an eigenvalue in each of the intervals $[x_i, x_i + \epsilon]$ for $i = 1,\ldots,k$ is $(\rho_k(x_1,\ldots,x_k) + o(1)) \epsilon^k$ in the limit $\epsilon \rightarrow 0$.
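For readers who like to experiment, the ensemble as defined above is easy to sample (a Python/numpy sketch of mine, not part of the post; the $1/\sqrt{n}$ rescaling and the limiting $[-2,2]$ support of the spectrum are standard facts used here only as a sanity check):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
# strictly upper triangular entries: iid complex gaussians, mean 0, variance 1
A = rng.normal(0, np.sqrt(0.5), (n, n)) + 1j * rng.normal(0, np.sqrt(0.5), (n, n))
M = np.triu(A, 1)
M = M + M.conj().T
M = M + np.diag(rng.normal(0, 1.0, n))   # iid real gaussian diagonal
assert np.allclose(M, M.conj().T)        # M is Hermitian

lam = np.linalg.eigvalsh(M)              # real eigenvalues, sorted
assert np.all(np.diff(lam) > 0)          # almost surely distinct

# after rescaling by 1/sqrt(n), the spectrum concentrates on [-2, 2]
x = lam / np.sqrt(n)
assert abs(x.min() + 2) < 0.3 and abs(x.max() - 2) < 0.3
```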
As is well known, the GUE process is a determinantal point process, which means that the $k$-point correlation functions can be explicitly computed as

$\displaystyle \rho_k(x_1,\ldots,x_k) = \det( K_n(x_i,x_j) )_{1 \leq i,j \leq k}$

for some kernel $K_n: {\bf R} \times {\bf R} \rightarrow {\bf R}$; explicitly, one has

$\displaystyle K_n(x,y) := \sum_{j=0}^{n-1} \phi_j(x) \phi_j(y)$

where the $\phi_j$ are formed from the (normalised) Hermite polynomials; see this previous blog post for details.
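The kernel is easy to evaluate numerically via the three-term recurrence for orthonormal Hermite functions. The sketch below (Python/numpy, mine, not from the post; it uses the weight-$e^{-x^2}$ convention for the Hermite functions, which differs from the GUE normalisation above by an inessential rescaling) checks that the diagonal of the projection kernel integrates to $n$ and follows a semicircle profile in the bulk:

```python
import numpy as np

def hermite_functions(x, n):
    """First n orthonormal Hermite functions h_0,...,h_{n-1} on the grid x,
    computed by the stable upward three-term recurrence."""
    h = np.zeros((n, len(x)))
    h[0] = np.pi ** -0.25 * np.exp(-x**2 / 2)
    if n > 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for k in range(1, n - 1):
        h[k + 1] = np.sqrt(2.0 / (k + 1)) * x * h[k] - np.sqrt(k / (k + 1)) * h[k - 1]
    return h

n = 40
x = np.linspace(-12, 12, 24001)
dx = x[1] - x[0]
h = hermite_functions(x, n)
K_diag = (h**2).sum(axis=0)          # K_n(x, x): the 1-point function

# the projection kernel reproduces the total number of points
assert abs(K_diag.sum() * dx - n) < 1e-4

# ...and is close to the semicircle profile (1/pi) sqrt(2n - x^2) in the bulk
semicircle = np.sqrt(np.maximum(2 * n - x**2, 0.0)) / np.pi
i0 = np.argmin(np.abs(x))            # the grid point x = 0
assert abs(K_diag[i0] - semicircle[i0]) / semicircle[i0] < 0.05
```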
Using the asymptotics of Hermite polynomials (which then give asymptotics for the kernel $K_n$), one can take a limit of a (suitably rescaled) sequence of GUE processes to obtain the Dyson sine process, which is a determinantal point process on the real line with correlation functions

$\displaystyle \rho_k(x_1,\ldots,x_k) = \det( K(x_i,x_j) )_{1 \leq i,j \leq k} \ \ \ \ \ (1)$

where $K$ is the Dyson sine kernel

$\displaystyle K(x,y) := \frac{\sin \pi(x-y)}{\pi(x-y)}.$
A bit more precisely, for any fixed bulk energy $u \in (-2,2)$, the renormalised point processes $\{ \rho_{sc}(u) \sqrt{n} ( \lambda_i(M_n) - u \sqrt{n} ): 1 \leq i \leq n \}$ converge in distribution in the vague topology to the Dyson sine process as $n \rightarrow \infty$, where $\rho_{sc}(u) := \frac{1}{2\pi} (4 - u^2)_+^{1/2}$ is the semi-circular law density.
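The determinantal formulas for the sine process are simple enough to check directly; a few lines of Python/numpy (mine, not from the post) confirm that the $1$-point function is identically $1$, and that the $2$-point function exhibits quadratic repulsion at short range and asymptotic independence at long range:

```python
import numpy as np

def K(x, y):
    # Dyson sine kernel sin(pi (x-y)) / (pi (x-y)); numpy's sinc is
    # exactly sin(pi t)/(pi t), with the removable singularity filled in
    return np.sinc(x - y)

def rho2(x, y):
    # 2-point correlation: determinant of the 2x2 kernel matrix
    return K(x, x) * K(y, y) - K(x, y) * K(y, x)

# density 1: one point per unit length on average
assert K(0.3, 0.3) == 1.0

# repulsion: rho_2 vanishes (quadratically) as the two points collide
assert abs(rho2(1.0, 1.001)) < 1e-4

# near-independence at large separations: rho_2 tends to 1
assert abs(rho2(0.0, 50.5) - 1.0) < 1e-3
```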
On the other hand, an important feature of the GUE process is its stationarity (modulo rescaling) under Dyson Brownian motion

$\displaystyle d\lambda_i = dB_i + \sum_{j \neq i} \frac{dt}{\lambda_i - \lambda_j}$
which describes the stochastic evolution of eigenvalues of a Hermitian matrix under independent Brownian motion of its entries, and is discussed in this previous blog post. To cut a long story short, this stationarity tells us that the self-similar $n$-point correlation function
obeys the Dyson heat equation
(see Exercise 11 of the previously mentioned blog post). Note that the correlation function vanishes to second order whenever two of its arguments coincide, so there is no singularity on the right-hand side. Using self-similarity to pass to a fixed time, we can rewrite this equation in time-independent form as
One can then integrate out all but $k$ of these variables (after carefully justifying convergence) to obtain a system of equations for the $k$-point correlation functions $\rho_k$:
where the integral is interpreted in the principal value sense. This system is an example of a BBGKY hierarchy.
If one carefully rescales and takes limits (say at the bulk energy $u = 0$, for simplicity), the left-hand side turns out to rescale to be a lower order term, and one ends up with a hierarchy for the Dyson sine process:
Informally, these equations show that the Dyson sine process is stationary with respect to the infinite Dyson Brownian motion

$\displaystyle d\lambda_i = dB_i + \sum_{j \neq i} \frac{dt}{\lambda_i - \lambda_j}$

where the $dB_i$ are independent Brownian increments, and the sum is interpreted in a suitable principal value sense.
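One can simulate the finite-$n$ Dyson Brownian motion $d\lambda_i = dB_i + \sum_{j \neq i} dt/(\lambda_i - \lambda_j)$ directly with a crude Euler scheme (a Python/numpy sketch of mine, not from the post; the step size and initial configuration are arbitrary choices). Two features are visible: the repulsive drift preserves the ordering of the eigenvalues, and the drift cancels pairwise, so the trace performs an ordinary Brownian motion:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 3, 1e-4, 1000
lam = np.array([-2.0, 0.0, 2.0])     # initial (ordered) configuration
total_noise = 0.0

for _ in range(steps):
    dB = rng.normal(0.0, np.sqrt(dt), n)
    total_noise += dB.sum()
    # repulsive drift dt / (lam_i - lam_j), summed over j != i
    drift = np.array([sum(dt / (lam[i] - lam[j]) for j in range(n) if j != i)
                      for i in range(n)])
    lam = lam + dB + drift

# the repulsive drift keeps the eigenvalues ordered...
assert np.all(np.diff(lam) > 0)
# ...and cancels pairwise, so the trace is an ordinary Brownian motion
# (initial trace is -2 + 0 + 2 = 0)
assert abs(lam.sum() - total_noise) < 1e-9
```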
I recently set myself the exercise of deriving the identity (3) directly from the definition (1) of the Dyson sine process, without reference to GUE. This turns out to not be too difficult when done the right way (namely, by modifying the proof of Gaudin’s lemma), although it did take me an entire day of work before I realised this, and I could not find it in the literature (though I suspect that many people in the field have privately performed this exercise in the past). In any case, I am recording the computation here, largely because I really don’t want to have to do it again, but perhaps it will also be of interest to some readers.